AWS Certified Solutions Architect Associate Practice Test 1 - Results

Question 1: Skipped

A financial application that calculates accruals, interest, and other data is hosted on a fleet of Spot EC2 instances configured with Auto Scaling. The application is used by an external reporting application that provides the total calculation for each user account and transaction. You used CloudWatch to automatically monitor the EC2 instances without manually checking the servers for high CPU utilization or crashes.

What is the time period of data that Amazon CloudWatch receives and aggregates from EC2 by default? 

Explanation

By default, your instance is enabled for basic monitoring. You can optionally enable detailed monitoring. After you enable detailed monitoring, the Amazon EC2 console displays monitoring graphs with a 1-minute period for the instance. The following table describes basic and detailed monitoring for instances.

  1. Basic - Data is available automatically in 5-minute periods at no charge.
  2. Detailed - Data is available in 1-minute periods for an additional cost. To get this level of data, you must specifically enable it for the instance. For the instances where you've enabled detailed monitoring, you can also get aggregated data across groups of similar instances.
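As an illustration, the monitoring level determines the smallest period you can request from CloudWatch. A sketch of the parameters for a `get_metric_statistics` call (the instance ID and timestamps below are placeholders):

```python
from datetime import datetime, timedelta, timezone

# Period must match the monitoring level: 300 s for basic, 60 s for detailed.
PERIOD_SECONDS = {"basic": 300, "detailed": 60}

def cpu_stats_request(instance_id: str, monitoring: str) -> dict:
    """Build the parameters for a CloudWatch GetMetricStatistics call."""
    end = datetime(2024, 1, 1, tzinfo=timezone.utc)
    return {
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "StartTime": end - timedelta(hours=1),
        "EndTime": end,
        "Period": PERIOD_SECONDS[monitoring],  # 5-minute datapoints by default
        "Statistics": ["Average"],
    }

params = cpu_stats_request("i-0123456789abcdef0", "basic")
print(params["Period"])  # 300
```

With basic monitoring, requesting a 60-second period would simply return the same 5-minute datapoints; the finer granularity only exists after detailed monitoring is enabled on the instance.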

 

References:

https://aws.amazon.com/cloudwatch/faqs/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html

 

Check out this Amazon CloudWatch Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-cloudwatch/

Question 2: Skipped
A content management system (CMS) is hosted on a fleet of auto-scaled, On-Demand EC2 instances which use Amazon Aurora as its database. Currently, the system stores the file documents that the users uploaded in one of the attached EBS Volumes. Your manager noticed that the system performance is quite slow and he has instructed you to improve the architecture of the system. In this scenario, what will you do to implement a scalable, high throughput file system?

Explanation

Amazon Elastic File System (Amazon EFS) provides simple, scalable, elastic file storage for use with AWS Cloud services and on-premises resources. When mounted on Amazon EC2 instances, an Amazon EFS file system provides a standard file system interface and file system access semantics, allowing you to seamlessly integrate Amazon EFS with your existing applications and tools. Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time, allowing Amazon EFS to provide a common data source for workloads and applications running on more than one Amazon EC2 instance.

This particular scenario tests your understanding of EBS, EFS, and S3. In this scenario, a fleet of On-Demand EC2 instances stores the users' file documents in one of the attached EBS Volumes. The system performance is slow because the architecture doesn't give the EC2 instances parallel, shared access to the file documents.

Remember that an EBS Volume can be attached to only one EC2 instance at a time; hence, no other EC2 instance can connect to that EBS Provisioned IOPS Volume. Take note as well that the type of storage needed here is "file storage," which means that S3 (Option 1) is not the best service to use because it is mainly used for "object storage" and only simulates folders through key prefixes rather than providing a true file system. This is why Option 2 is the correct answer.

 

 

Option 3 is incorrect because the scenario requires you to set up a scalable, high throughput storage system that will allow concurrent access from multiple EC2 instances. This is clearly not possible in EBS, even with Provisioned IOPS SSD Volumes. You have to use EFS instead.

Option 4 is incorrect because ElastiCache is an in-memory data store that improves the performance of your applications, which is not what you need since it is not a file storage.

 

Reference:

https://aws.amazon.com/efs/

 

Check out this Amazon EFS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-efs/

 

Check out this Amazon S3 vs EBS vs EFS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3-vs-ebs-vs-efs/

 


Question 3: Skipped

You are working for a University as their AWS Consultant. They want to have a disaster recovery strategy in AWS for mission-critical applications after suffering a disastrous outage wherein they lost student and employee records. They don't want this to happen again but at the same time want to minimize the monthly costs. You are instructed to set up a minimum version of the application that is always available in case of any outages.   

Which of the following disaster recovery architectures is the most suitable one to use in this scenario?

Explanation

The correct answer is pilot light.

The term pilot light is often used to describe a DR scenario in which a minimal version of an environment is always running in the cloud. The idea of the pilot light is an analogy that comes from the gas heater. In a gas heater, a small flame that’s always on can quickly ignite the entire furnace to heat up a house. This scenario is similar to a backup-and-restore scenario.

For example, with AWS you can maintain a pilot light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around the critical core.

 

Reference:

https://media.amazonwebservices.com/AWS_Disaster_Recovery.pdf

Question 4: Skipped

Using the EC2 API, you requested 40 m5.large On-Demand EC2 instances in a single Availability Zone. Twenty instances were successfully created but the other 20 requests failed.   

What is the solution for this issue and what is the root cause? 

Explanation

Amazon EC2 has a default soft limit of 20 On-Demand instances per region. This can be easily resolved by completing the Amazon EC2 instance limit increase request form, in which you describe your use case and the new limit you need. Limit increases are tied to the region they were requested for.
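The failure mode can be sketched with a simple quota check (a simulation only; the 20-instance figure was the historical per-region default, and actual quotas are account-specific):

```python
DEFAULT_ON_DEMAND_LIMIT = 20  # historical per-region default for On-Demand instances

def launchable(requested: int, already_running: int,
               limit: int = DEFAULT_ON_DEMAND_LIMIT) -> int:
    """Return how many of the requested instances fit under the regional limit."""
    return max(0, min(requested, limit - already_running))

# 40 instances requested in one region with nothing running: only 20 succeed,
# and the remaining 20 requests fail until a limit increase is approved.
print(launchable(40, 0))  # 20
```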

Option 2 is incorrect because there is no such per-Availability-Zone limit; EC2 instance limits apply per region.

Option 3 is incorrect. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. It does not affect the creation of new EC2 instances.

Option 4 is incorrect as there is no problem with your API credentials. 

 

References:

https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2

http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

  

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 5: Skipped

A financial company instructed you to automate the recurring tasks in your department such as patch management, infrastructure selection, and data synchronization to improve their current processes. You need to have a service which can coordinate multiple AWS services into serverless workflows.   

Which of the following is the most cost-effective service to use in this scenario? 

Explanation

AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, your application can pick up right where it left off.

Application development is faster and more intuitive with Step Functions, because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring and maintaining multiple point-to-point integrations. Step Functions frees your functions and containers from excess code, so your applications are faster to write, more resilient, and easier to maintain.
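A workflow like the one described can be expressed as an Amazon States Language definition. A minimal two-step sketch wiring Lambda-backed tasks together (the function ARNs below are placeholders):

```python
import json

# A two-step maintenance workflow: apply patches, then synchronize data.
state_machine = {
    "Comment": "Recurring maintenance workflow (illustrative)",
    "StartAt": "ApplyPatches",
    "States": {
        "ApplyPatches": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:apply-patches",
            "Next": "SyncData",
        },
        "SyncData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:sync-data",
            "End": True,
        },
    },
}

# The JSON string is what you would pass to Step Functions as the definition.
definition = json.dumps(state_machine)
print(state_machine["StartAt"])  # ApplyPatches
```

Step Functions tracks the state transitions between `ApplyPatches` and `SyncData` for you, which is exactly the coordination the scenario asks for.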

Option 1 is incorrect because SWF is a fully-managed state tracker and task coordinator service. It does not provide serverless orchestration to multiple AWS resources.

Option 2 is incorrect because although Lambda is used for serverless computing, it does not provide a direct way to coordinate multiple AWS services into serverless workflows.

Option 4 is incorrect because AWS Batch is primarily used to efficiently run hundreds of thousands of batch computing jobs in AWS.

Reference:

https://aws.amazon.com/step-functions/features/

Question 6: Skipped

A media company has two VPCs: VPC-1 and VPC-2, with a peering connection between them. VPC-1 contains only private subnets while VPC-2 contains only public subnets. The company uses a single AWS Direct Connect connection and a virtual interface to connect their on-premises network with VPC-1.

Which of the following options increase the fault tolerance of the connection to VPC-1? (Select all that apply.)

Explanation

In this scenario, you have two VPCs which have peering connections with each other. Note that a VPC peering connection does not support edge to edge routing. This means that if either VPC in a peering relationship has one of the following connections, you cannot extend the peering relationship to that connection:

  • A VPN connection or an AWS Direct Connect connection to a corporate network
  • An internet connection through an internet gateway
  • An internet connection in a private subnet through a NAT device
  • A VPC endpoint to an AWS service; for example, an endpoint to Amazon S3
  • (IPv6) A ClassicLink connection. You can enable IPv4 communication between a linked EC2-Classic instance and instances in a VPC on the other side of a VPC peering connection. However, IPv6 is not supported in EC2-Classic, so you cannot extend this connection for IPv6 communication.

 

For example, if VPC A and VPC B are peered, and VPC A has any of these connections, then instances in VPC B cannot use the connection to access resources on the other side of the connection. Similarly, resources on the other side of a connection cannot use the connection to access VPC B.

Hence, this means that you cannot use VPC-2 to extend the peering relationship that exists between VPC-1 and the on-premise network. For example, traffic from the corporate network can't directly access VPC-1 by using the VPN connection or the AWS Direct Connect connection to VPC-2, which is why Options 1, 3, and 4 are incorrect.

The correct answers are options 2 and 5. You can do the following to provide a highly available, fault-tolerant network connection:

  • Establish a hardware VPN over the Internet between the VPC and the on-premises network.
  • Establish another AWS Direct Connect connection and private virtual interface in the same AWS region.

 

References:

https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html#edge-to-edge-vgw

https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/

https://aws.amazon.com/answers/networking/aws-multiple-data-center-ha-network-connectivity/

 

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/

Question 7: Skipped
A health organization is using a large Dedicated EC2 instance with multiple EBS volumes to host its health records web application. The EBS volumes must be encrypted due to the confidentiality of the data that they are handling and also to comply with the HIPAA (Health Insurance Portability and Accountability Act) standard. In EBS encryption, what service does AWS use to secure the volume's data at rest? (Choose 2)

Explanation

Amazon EBS encryption offers seamless encryption of EBS data volumes, boot volumes, and snapshots, eliminating the need to build and maintain a secure key management infrastructure. EBS encryption enables data at rest security by encrypting your data using Amazon-managed keys, or keys you create and manage using the AWS Key Management Service (KMS). The encryption occurs on the servers that host EC2 instances, providing encryption of data as it moves between EC2 instances and EBS storage. Hence, options 1 and 3 are the right answers.
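In practice, EBS encryption is requested at volume creation time. A sketch of the parameters for an `ec2.create_volume` call using a customer-managed KMS key (the key ARN below is a placeholder):

```python
# Parameters for an ec2.create_volume call that encrypts the volume with a
# customer-managed KMS key (the key ARN below is a placeholder).
encrypted_volume = {
    "AvailabilityZone": "us-east-1a",
    "Size": 100,          # GiB
    "VolumeType": "gp2",
    "Encrypted": True,    # data at rest is encrypted on the EBS side
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}

# Omitting KmsKeyId while Encrypted=True falls back to the AWS-managed EBS key.
print(encrypted_volume["Encrypted"])  # True
```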

Options 2 and 4 are incorrect as these relate only to S3.

Option 5 is incorrect as you only store keys in CloudHSM and not passwords.

Option 6 is incorrect as ACM only provides SSL certificates and not data encryption of EBS Volumes.

 

Reference:

https://aws.amazon.com/ebs/faqs/

  

Check out this Amazon EBS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-ebs/

Question 8: Skipped

A popular mobile game uses CloudFront, Lambda, and DynamoDB for its backend services. The player data is persisted on a DynamoDB table and the static assets are distributed by CloudFront. However, there are a lot of complaints that saving and retrieving player information is taking a lot of time.

To improve the game's performance, which AWS service can you use to reduce DynamoDB response times from milliseconds to microseconds?

Explanation

Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache that can reduce Amazon DynamoDB response times from milliseconds to microseconds, even at millions of requests per second.
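DAX sits in front of the table as a read-through/write-through cache. Its effect on reads can be sketched with a toy in-memory cache (a simulation only, not the actual DAX client API):

```python
table = {"player-1": {"score": 9000}}  # stands in for a DynamoDB table
cache: dict = {}                       # stands in for the DAX item cache

def get_item(key: str) -> dict:
    """Read-through: serve from cache when possible, else fetch and populate."""
    if key in cache:
        return cache[key]   # microsecond-scale in-memory hit
    item = table[key]       # millisecond-scale table read
    cache[key] = item
    return item

get_item("player-1")        # first read: cache miss, populates the cache
assert "player-1" in cache  # subsequent reads are served from memory
```

Because DAX is API-compatible with DynamoDB, the application keeps its existing read/write calls; only the endpoint changes.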

Option 1 is incorrect because Amazon Elasticsearch Service is a fully managed service that makes it easy for you to deploy, secure, operate, and scale your Elasticsearch engine to search, analyze, and visualize data in real-time. Although you may integrate Elasticsearch with DynamoDB, it will not reduce the DynamoDB response time from milliseconds to microseconds, even at millions of requests per second, whereas DynamoDB DAX can.

Option 2 is incorrect because AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time.

Option 3 is incorrect because DynamoDB Auto Scaling is primarily used to automate capacity management for your tables and global secondary indexes.

 
References:

https://aws.amazon.com/dynamodb/dax

https://aws.amazon.com/device-farm

 

Check out this Amazon DynamoDB Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-dynamodb/

Question 9: Skipped

A tech company has a CRM application hosted on an Auto Scaling group of On-Demand EC2 instances. The application is extensively used during office hours from 9 in the morning till 5 in the afternoon. Their users are complaining that the performance of the application is slow during the start of the day but then works normally after a couple of hours.

Which of the following can be done to ensure that the application works properly at the beginning of the day?

Explanation

Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.

 

An illustration of a basic Auto Scaling group.

 

To configure your Auto Scaling group to scale based on a schedule, you create a scheduled action. The scheduled action tells Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a scheduled scaling action, you specify the start time when the scaling action should take effect, and the new minimum, maximum, and desired sizes for the scaling action. At the specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum, maximum, and desired size specified by the scaling action. You can create scheduled actions for scaling one time only or for scaling on a recurring schedule.
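A sketch of the parameters for a `put_scheduled_update_group_action` call that scales out shortly before the 9 a.m. rush (the group name and exact capacities are placeholders; the recurrence is a cron expression evaluated in UTC):

```python
# Scale out at 08:30 on weekdays so capacity is ready before office hours.
scale_out_action = {
    "AutoScalingGroupName": "crm-asg",        # placeholder group name
    "ScheduledActionName": "scale-out-morning",
    "Recurrence": "30 8 * * MON-FRI",         # cron expression, UTC
    "MinSize": 4,
    "MaxSize": 12,
    "DesiredCapacity": 8,
}
print(scale_out_action["Recurrence"])
```

A matching evening action would shrink the group back down after 5 p.m. to avoid paying for idle capacity overnight.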

Option 3 is the correct answer. You need to configure a Scheduled scaling policy. This will ensure that the instances are already scaled up and ready before the start of the day since this is when the application is used the most.

Options 1 and 2 are incorrect because, although dynamic scaling on CPU or memory utilization is a valid approach, it is still better to configure a Scheduled scaling policy since you already know the exact peak hours of your application. By the time CPU or memory utilization hits a peak, the application is already experiencing performance issues, so the scaling needs to be done beforehand using a Scheduled scaling policy.

Option 4 is incorrect. Although the Application Load Balancer can also balance the traffic, it cannot increase the number of instances based on demand.

 

Reference:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html

 

Check out this AWS Auto Scaling Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-auto-scaling/

Question 10: Skipped

An online health record system, which provides centralized health records of all citizens, has been migrated to AWS. The system is hosted in one large EBS-backed EC2 instance which hosts both its web server and database.   

Which of the following does not happen when you stop a running EBS-backed EC2 instance? 

Explanation

All of these happen when you stop a running EBS-backed EC2 instance except for option 4. The instance retains its associated Elastic IP addresses if it is on the EC2-VPC platform, but not on EC2-Classic.

When you stop a running instance, the following happens:

  • The instance performs a normal shutdown and stops running; its status changes to stopping and then stopped.
  • Any Amazon EBS volumes remain attached to the instance, and their data persists.
  • Any data stored in the RAM of the host computer or the instance store volumes of the host computer is gone.
  • In most cases, the instance is migrated to a new underlying host computer when it's started.
  • EC2-Classic: AWS releases the public and private IPv4 addresses for the instance when you stop it, and assigns new ones when you restart it.
  • EC2-VPC: The instance retains its private IPv4 addresses and any IPv6 addresses when stopped and restarted. AWS releases the public IPv4 address and assigns a new one when you restart it.
  • EC2-Classic: AWS disassociates any Elastic IP address that's associated with the instance. You're charged for Elastic IP addresses that aren't associated with an instance. When you restart the instance, you must associate the Elastic IP address with the instance; AWS doesn't do this automatically.
  • EC2-VPC: The instance retains its associated Elastic IP addresses. You're charged for any Elastic IP addresses associated with a stopped instance.

 

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

 

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 11: Skipped
You have a web application hosted in EC2 that consumes messages from an SQS queue and is integrated with SNS to send out an email to you once the process is complete. You received 5 orders but after a few hours, you saw more than 20 email notifications in your inbox. Which of the following could be the possible culprit for this issue?

Explanation

Always remember that the messages in an SQS queue continue to exist even after an EC2 instance has processed them, until you delete them. You have to ensure that you delete each message after processing it to prevent the message from being received and processed again once the visibility timeout expires.

There are three main parts in a distributed messaging system:

  1. The components of your distributed system (EC2 instances)
  2. Your queue (distributed on Amazon SQS servers)
  3. The messages in the queue

You can set up a system which has several components that send messages to the queue and receive messages from the queue. The queue redundantly stores the messages across multiple Amazon SQS servers.

 

 

Refer to the third step of the SQS Message Lifecycle:

  1. Component 1 sends Message A to a queue, and the message is distributed across the Amazon SQS servers redundantly.
  2. When Component 2 is ready to process a message, it consumes messages from the queue, and Message A is returned. While Message A is being processed, it remains in the queue and isn't returned to subsequent receive requests for the duration of the visibility timeout. 
  3. Component 2 deletes Message A from the queue to prevent the message from being received and processed again once the visibility timeout expires.
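The lifecycle above, and the duplicate-delivery bug in this scenario, can be sketched with a toy queue (a simulation only; real SQS expires visibility per message on a timer):

```python
class ToyQueue:
    """Minimal simulation of the SQS visibility timeout (not the real API)."""
    def __init__(self):
        self.messages = {}      # message id -> body
        self.invisible = set()  # ids currently hidden by the visibility timeout
        self._next_id = 0

    def send(self, body):
        self._next_id += 1
        self.messages[self._next_id] = body
        return self._next_id

    def receive(self):
        for mid in self.messages:
            if mid not in self.invisible:
                self.invisible.add(mid)  # hide the message while it's processed
                return mid, self.messages[mid]
        return None

    def visibility_timeout_expires(self):
        self.invisible.clear()  # undeleted messages become visible again

    def delete(self, mid):
        self.messages.pop(mid, None)
        self.invisible.discard(mid)

q = ToyQueue()
q.send("order-1")
mid, _ = q.receive()            # consumer processes the message...
q.visibility_timeout_expires()  # ...but never deletes it
print(q.receive() is not None)  # True: the same order is delivered again
```

This is exactly why 5 orders can produce 20+ emails: each redelivered message triggers the SNS notification again until the consumer starts deleting messages after processing.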

 

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-lifecycle.html

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-architecture.html

 

Check out this Amazon SQS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-sqs/

Question 12: Skipped

You are setting up a cost-effective architecture for a log processing application which has frequently accessed, throughput-intensive workloads. The application should be hosted in an On-Demand EC2 instance in your VPC.   

Which of the following is the most suitable EBS volume type to use in this scenario?   

Explanation

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. This volume type is a good fit for large, sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable st1 volumes are not supported.

Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed to support frequently accessed data.
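The guidance above can be condensed into a rough lookup of workload profile to volume type (illustrative shorthand only, not an exhaustive decision tree):

```python
# (performance dimension, access pattern) -> EBS volume type
EBS_CHOICES = {
    ("throughput", "frequent"):   "st1",  # Throughput Optimized HDD
    ("throughput", "infrequent"): "sc1",  # Cold HDD
    ("iops", "frequent"):         "io1",  # Provisioned IOPS SSD
    ("general", "frequent"):      "gp2",  # General Purpose SSD
}

def pick_volume_type(profile: str, access: str) -> str:
    return EBS_CHOICES[(profile, access)]

# Frequently accessed, throughput-intensive log processing -> st1.
print(pick_volume_type("throughput", "frequent"))  # st1
```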

 

  

Option 1 is incorrect because Amazon EBS Provisioned IOPS SSD is not the most cost-effective EBS type and is primarily used for critical business applications that require sustained IOPS performance.

Option 3 is incorrect because although an Amazon EBS General Purpose SSD volume balances price and performance for a wide variety of workloads, it is not suitable for frequently accessed, throughput-intensive workloads. Throughput Optimized HDD is a more suitable option to use than General Purpose SSD.

Option 4 is incorrect because although Amazon EBS Cold HDD provides the lowest-cost HDD volume compared to General Purpose SSD, it is more suitable for less frequently accessed workloads.

 

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#EBSVolumeTypes_st1

 

Check out this Amazon EBS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-ebs/

Question 13: Skipped
The game development company that you are working for has an Amazon VPC with a public subnet. It has 4 EC2 instances that are deployed in the public subnet. These 4 instances can successfully communicate with other hosts on the Internet. You launched a fifth instance in the same public subnet, using the same AMI and security group configuration that you used for the others. However, this new instance cannot be accessed from the Internet, unlike the other instances. What should you do to enable access to the fifth instance over the Internet?

Explanation

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.

An Elastic IP address is a public IPv4 address, which is reachable from the Internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.

Option 1 is incorrect because it is already mentioned that your instances are in a public subnet. You only have to configure a NAT instance when your instances are in a private subnet.

Option 2 is the correct answer because you need to either assign a public IPv4 address or associate an Elastic IP address with this EC2 instance for it to be reachable from the Internet.

Option 3 is incorrect because the public IP address has to be configured in the Elastic Network Interface (ENI) of the EC2 instance and not on its Operating System (OS).

Option 4 is incorrect because if the routing table were wrong, you would have an issue with the other 4 instances as well.

 

Reference:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html

 

Question 14: Skipped

You are working as a Cloud Engineer in a leading technology consulting firm which is using a fleet of Windows-based EC2 instances with IPv4 addresses launched in a private subnet. Several software packages installed on the EC2 instances need to be updated via the Internet.

Which of the following services can provide you with a highly available solution that safely allows the instances to fetch software patches from the Internet while preventing outside networks from initiating a connection?

Explanation

AWS offers two kinds of NAT devices: a NAT gateway and a NAT instance. It is recommended to use NAT gateways, as they provide better availability and bandwidth than NAT instances. The NAT Gateway service is also a managed service that does not require administration effort on your part. A NAT instance, in contrast, is launched from a NAT AMI.

Just like a NAT instance, you can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
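Setting this up takes two API calls: create the NAT gateway in a public subnet, then point the private subnet's default route at it. A sketch of the parameter sets (all resource IDs below are placeholders):

```python
# Parameters for ec2.create_nat_gateway: the gateway lives in a PUBLIC subnet
# and needs an Elastic IP allocation.
nat_gateway_params = {
    "SubnetId": "subnet-public-1",
    "AllocationId": "eipalloc-0abc1234567890def",
}

# Parameters for ec2.create_route on the PRIVATE subnet's route table:
# send all internet-bound traffic through the NAT gateway.
private_route_params = {
    "RouteTableId": "rtb-private-1",
    "DestinationCidrBlock": "0.0.0.0/0",
    "NatGatewayId": "nat-0abcdef1234567890",
}

# NAT only maps outbound flows; inbound connections are never initiated through it.
print(private_route_params["DestinationCidrBlock"])  # 0.0.0.0/0
```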

Here is a diagram showing the differences between NAT gateway and NAT instance:

 

Option 1 is incorrect because an Egress-only Internet gateway is primarily used for VPCs that use IPv6 to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances, just like what NAT Instance and NAT Gateway do. The scenario explicitly says that the EC2 instances are using IPv4 addresses which is why Egress-only Internet gateway is invalid, even though it can provide the required high availability.

Option 2 is incorrect because a VPC endpoint simply enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an Internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Option 4 is incorrect because although a NAT instance can also enable instances in a private subnet to connect to the Internet or other AWS services and prevent the Internet from initiating a connection with those instances, it is not as highly available compared to a NAT Gateway.

 

References:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html

https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html

 

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/

Question 15: Skipped
A software development company has recently invested 20 million dollars to build their own artificial intelligence APIs and AI-powered chatbots. You are hired as a Solutions Architect to build a low-cost prototype on their AWS cloud infrastructure. Which of the following combination of AWS services will provide user authentication, scalable object storage and will allow you to run your code without the need to host it in an EC2 instance?

Explanation

In this scenario, it is best to use a combination of Cognito, Lambda, and S3. Cognito will handle the user authentication; Lambda provides the serverless architecture that allows you to run your code without deploying it in an EC2 instance and finally, S3 provides a scalable object storage.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.

With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
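A toy Lambda handler for the chatbot backend might look like the sketch below. The event fields and reply logic are hypothetical; in the full design, Cognito authenticates the caller before this handler runs and S3 holds the static assets and model artifacts:

```python
import json

def lambda_handler(event, context):
    """Toy chatbot handler: echo the message for an authenticated user.

    'username' would come from the Cognito-authorized request context in a
    real deployment; here it is read straight from the event for illustration.
    """
    user = event.get("username", "anonymous")
    message = event.get("message", "")
    return {
        "statusCode": 200,
        "body": json.dumps({"user": user, "reply": f"You said: {message}"}),
    }

result = lambda_handler({"username": "alice", "message": "hi"}, None)
print(result["statusCode"])  # 200
```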

 

References:

https://aws.amazon.com/lambda/

https://aws.amazon.com/cognito/

https://aws.amazon.com/blogs/machine-learning/how-to-deploy-deep-learning-models-with-aws-lambda-and-tensorflow/

 

Question 16: Skipped

A startup based in Australia is deploying a new two-tier web application in AWS. The Australian company wants to store their most frequently used data in an in-memory data store to improve the retrieval and response time of their web application.   

Which of the following is the most suitable service to be used for this requirement? 

Explanation

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

 

 

Option 1 is incorrect because DynamoDB is primarily used as a NoSQL database which supports both document and key-value store models. ElastiCache is a more suitable service to use than DynamoDB, if you need an in-memory data store.

Option 2 is incorrect because RDS is mainly used as a relational database and not as a data storage for frequently used data.

Option 4 is incorrect because Redshift is a data warehouse service and is not suitable to be used as an in-memory data store.

 

References:

https://aws.amazon.com/elasticache/

https://aws.amazon.com/products/databases/

 

Check out this Amazon Elasticache Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elasticache/ 

Question 17: Skipped
There are many clients complaining that the online trading application of an investment bank is always down. Your manager instructed you to re-design the architecture of the application to prevent unnecessary service interruptions. To ensure high availability, you set up the application to use an ELB to distribute the incoming requests across an auto-scaled group of EC2 instances in two Availability Zones. In this scenario, what happens when an EC2 instance behind the ELB fails a health check?

Explanation

In this scenario, the load balancer will route the incoming requests only to the healthy instances. When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance. The load balancer resumes routing requests to the instance when it has been restored to a healthy state.

There are two ways of checking the status of your EC2 instances:

1. Via the Auto Scaling group

2. Via the ELB health checks

 

The default health checks for an Auto Scaling group are EC2 status checks only. If an instance fails these status checks, the Auto Scaling group considers the instance unhealthy and replaces it. If you attached one or more load balancers or target groups to your Auto Scaling group, the group does not, by default, consider an instance unhealthy and replace it if it fails the load balancer health checks.

However, you can optionally configure the Auto Scaling group to use Elastic Load Balancing health checks. This ensures that the group can determine an instance's health based on additional tests provided by the load balancer. The load balancer periodically sends pings, attempts connections, or sends requests to test the EC2 instances. These tests are called health checks.

If you configure the Auto Scaling group to use Elastic Load Balancing health checks, it considers the instance unhealthy if it fails either the EC2 status checks or the load balancer health checks. If you attach multiple load balancers to an Auto Scaling group, all of them must report that the instance is healthy in order for it to consider the instance healthy. If one load balancer reports an instance as unhealthy, the Auto Scaling group replaces the instance, even if other load balancers report it as healthy.
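As a rough sketch of that configuration change (the group name and grace period below are hypothetical), switching an Auto Scaling group to Elastic Load Balancing health checks means setting its HealthCheckType to "ELB":

```python
# Sketch of the parameters for the UpdateAutoScalingGroup API (group name and
# grace period are hypothetical). With boto3 this dict would be passed to
# boto3.client("autoscaling").update_auto_scaling_group(**asg_params).
asg_params = {
    "AutoScalingGroupName": "trading-app-asg",  # hypothetical name
    "HealthCheckType": "ELB",       # also replace instances failing ELB checks
    "HealthCheckGracePeriod": 300,  # seconds before health checks begin
}
print(asg_params["HealthCheckType"])
```

With "ELB" set, the group treats an instance as unhealthy if it fails either the EC2 status checks or the load balancer health checks, as described above.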

 

 

References:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-healthchecks.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-healthcheck.html

 

Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-elastic-load-balancing-elb/

 

Here is additional training material on why an Amazon EC2 Auto Scaling group terminates a healthy instance:

Question 18: Skipped

You are working for an advertising company as their Senior Solutions Architect, handling the S3 storage data. Your company has terabytes of data sitting in the Amazon S3 Standard storage class, which accumulates significant operational costs. The management wants to cut down on the cost of their cloud infrastructure, so you were instructed to switch to Glacier to lessen the cost per GB of storage.

The Amazon Glacier storage service is primarily used for which use case? (Choose 2) 

Explanation

Amazon Glacier is an extremely low-cost storage service that provides secure, durable, and flexible storage for data backup and archival. Amazon Glacier is designed to store data that is infrequently accessed. Amazon Glacier enables customers to offload the administrative burdens of operating and scaling storage to AWS so that they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

Option 1 is incorrect because storing cached session data is the main use case for ElastiCache and not Amazon Glacier.

Option 4 is incorrect because you should use RDS or DynamoDB for your active database storage as S3, in general, is used for storing your data or files.

Option 5 is incorrect because storing it for data warehousing is the main use case of Amazon Redshift. It does not meet the requirement of being able to archive your infrequently accessed data. You can use S3 standard instead for frequently accessed data or Glacier for infrequently accessed data and archiving.

It is advisable to transition data from S3 Standard to the Infrequent Access tier first, and then to Amazon Glacier. In the lifecycle rule, you can specify how long objects remain in the Standard and Infrequent Access tiers. You can also delete the objects after a certain amount of time.

 

To transition objects from S3 Standard to Glacier, you need to tell S3 which objects are to be archived to the Glacier storage option, and under what conditions. You do this by setting up a lifecycle rule using the following elements:

  • -A prefix to specify which objects in the bucket are subject to the policy.
  • -A relative or absolute time specifier and a time period for transitioning objects to Glacier. The time periods are interpreted with respect to the object’s creation date. They can be relative (migrate items that are older than a certain number of days) or absolute (migrate items on a specific date).
  • -An object age at which the object will be deleted from S3. This is measured from the original PUT of the object into the service, and the clock is not reset by a transition to Glacier.
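The elements above map directly onto an S3 lifecycle configuration. A hedged sketch of such a rule in the shape accepted by boto3's put_bucket_lifecycle_configuration (the prefix, day counts, and bucket name are hypothetical):

```python
# Hypothetical lifecycle rule: objects under the "logs/" prefix move to
# Standard-IA after 30 days, to Glacier after 90 days, and are deleted
# 365 days after creation (the deletion clock is not reset by transitions).
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}
# With boto3 (requires AWS credentials), this would be applied with:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)
print(lifecycle_configuration["Rules"][0]["ID"])
```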

 

You can create a lifecycle rule in the AWS Management Console.

 

Reference:

https://aws.amazon.com/glacier/faqs/

 

Check out this Amazon Glacier Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-glacier/

Question 19: Skipped
You are working for a software company that has moved a legacy application from an on-premise data center to the cloud. The legacy application requires a static IP address hard-coded into the backend, which blocks you from using an Application Load Balancer. Which steps would you take to apply high availability and fault tolerance to this application without ELB? (Choose 2)

Explanation

For this scenario, it is best to set up a self-monitoring EC2 instance with a virtual IP Address. You can use an Elastic IP and then write a custom script that checks the health of the EC2 instance and if the instance stops responding, the script will switch the Elastic IP address to a standby EC2 instance.

 

A custom script enables one Amazon Elastic Compute Cloud (EC2) instance to monitor another Amazon EC2 instance and take over a private "virtual" IP address on instance failure. When used with two instances, the script enables a High Availability scenario where instances monitor each other and take over a shared virtual IP address if the other instance fails. It could easily be modified to run on a third-party monitoring or witness server to perform the VIP swapping on behalf of the two monitored nodes.
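The core decision in such a failover script can be sketched as a small pure function (the instance IDs below are hypothetical); a real script would also perform repeated health probes and execute the swap via EC2's AssociateAddress API:

```python
def choose_eip_holder(primary_healthy: bool, primary: str, standby: str) -> str:
    """Return the instance that should hold the Elastic IP address."""
    return primary if primary_healthy else standby

# Hypothetical instance IDs. In a real script, primary_healthy would come from
# repeated HTTP/ping probes, and the actual swap would be performed with
# boto3.client("ec2").associate_address(...) against the standby instance.
print(choose_eip_holder(False, "i-0primary", "i-0standby"))  # → i-0standby
```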

Option 3 is incorrect because you don't have to postpone your deployment as you have the option to set up a self-monitoring EC2 instance with an EIP address.

Option 4 is incorrect as even though the Auto Scaling group provides high availability and scalability, it still depends on an ELB, which is not available in this scenario. Take note that you need a static IP address, which can be in the form of an Elastic IP. Although an Auto Scaling group can scale out if one of the EC2 instances becomes unhealthy, you still cannot directly assign an EIP to an Auto Scaling group. In addition, without an ELB you are limited to using EC2 instance status checks for your Auto Scaling group, which report only the health of the EC2 instance and not the actual health of your application (via its port).

Option 5 is incorrect because although this option is feasible, the goal of the company is to move the application to the cloud and not to continue using its on-premise resources. 

References:

https://aws.amazon.com/articles/leveraging-multiple-ip-addresses-for-virtual-ip-address-fail-over-in-6-simple-steps

https://aws.amazon.com/blogs/apn/amazon-vpc-for-on-premises-network-engineers-part-two/

 

Question 20: Skipped

A traffic monitoring and reporting application uses Kinesis to accept real-time data. In order to process and store the data, they used Amazon Kinesis Data Firehose to load the streaming data to various AWS resources.   

Which of the following services can you load streaming data into? 

Explanation

Amazon Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics tools. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk, enabling near real-time analytics with existing business intelligence tools and dashboards you’re already using today.

It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration. It can also batch, compress, and encrypt the data before loading it, minimizing the amount of storage used at the destination and increasing security.

 

 

 

Options 1 and 2 are incorrect because Amazon S3 Select is just a feature of Amazon S3. Likewise, Redshift Spectrum is also just a feature of Amazon Redshift. Although Amazon Kinesis Data Firehose can load streaming data to both Amazon S3 and Amazon Redshift, it does not directly load the data to S3 Select and Redshift Spectrum. 

S3 Select is an Amazon S3 feature that makes it easy to retrieve specific data from the contents of an object using simple SQL expressions without having to retrieve the entire object. Amazon Redshift Spectrum is a feature of Amazon Redshift that enables you to run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required.

Option 4 is incorrect because Amazon Kinesis Data Firehose cannot load streaming data to Athena.

 

Reference:

https://aws.amazon.com/kinesis/data-firehose/

 

Check out this Amazon Kinesis Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-kinesis/

Question 21: Skipped

You are leading a software development team which uses serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. You have a Lambda function that connects to a MongoDB Atlas, which is a popular Database as a Service (DBaaS) platform and also uses a third party API to fetch certain data for your application. You instructed one of your junior developers to create the environment variables for the MongoDB database hostname, username, and password as well as the API credentials that will be used by the Lambda function for DEV, SIT, UAT and PROD environments. 

Considering that the Lambda function is storing sensitive database and API credentials, how can you secure this information to prevent other developers in your team, or anyone else, from seeing these credentials in plain text? Select the best option that provides the maximum security.

Explanation

When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.

The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, if you wish to use encryption helpers and use KMS to encrypt environment variables after your Lambda function is created, you must create your own AWS KMS key and choose it instead of the default key. The default key will give errors when chosen. Creating your own key gives you more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect your data.
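As a sketch, these are the relevant parameters when creating a function with a customer managed key (all names, ARNs, and values below are hypothetical); with boto3 the dict would be passed to Lambda's create_function:

```python
# Hypothetical names and ARNs. KMSKeyArn selects a customer managed KMS key
# instead of the default service key for encrypting the environment
# variables at rest.
create_function_params = {
    "FunctionName": "forex-data-fetcher",  # hypothetical function name
    "KMSKeyArn": "arn:aws:kms:us-east-1:111122223333:key/hypothetical-key-id",
    "Environment": {
        "Variables": {
            "DB_HOSTNAME": "cluster.mongodb.example",  # hypothetical values
            "DB_USERNAME": "app_user",
        }
    },
}
# With boto3 (plus Role, Runtime, Handler, and Code, and AWS credentials):
# boto3.client("lambda").create_function(**create_function_params)
print("KMSKeyArn" in create_function_params)
```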

 

Reference:

https://docs.aws.amazon.com/lambda/latest/dg/env_variables.html#env_encrypt

 

Check out this AWS Lambda Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-lambda/

Question 22: Skipped
In your AWS VPC, you need to add a new subnet that will allow you to host a total of 20 EC2 instances. Which of the following IPv4 CIDR blocks can you use for this scenario?

Explanation

To calculate the total number of IP addresses in a given CIDR block, you simply need to follow the 2 easy steps below. Let's say you have a /27 CIDR block:

1. Subtract the mask number from 32:

(32 - 27) = 5

2. Raise 2 to the power of the result from Step #1:

2^5 = 2 * 2 * 2 * 2 * 2 = 32

The answer to Step #2 is the total number of IP addresses available in the given CIDR netmask. Don't forget that in AWS, the first 4 IP addresses and the last IP address in each subnet CIDR block are not available for you to use, and cannot be assigned to an instance. As a handy coincidence for the exam, a /27 netmask therefore yields exactly 27 usable IP addresses (32 - 5).
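The two steps, plus the AWS five-address reservation, can be expressed as a small helper (a sketch for checking subnet sizing):

```python
# Sketch: usable IP addresses in an AWS subnet for a given netmask length.
def total_ips(mask: int) -> int:
    # Steps 1 and 2 above: 2 ** (32 - mask)
    return 2 ** (32 - mask)

def aws_usable_ips(mask: int) -> int:
    # AWS reserves the first 4 and the last address of every subnet CIDR block.
    return total_ips(mask) - 5

print(aws_usable_ips(27))  # → 27, enough for 20 instances
print(aws_usable_ips(28))  # → 11, not enough for 20 instances
```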

Option 1 is the correct answer because the CIDR block 172.0.0.0/27 provides 27 usable IP addresses, which is enough for 20 EC2 instances. Take note that a /27 netmask originally provides 32 IP addresses, but AWS reserves 5 of them: the first 4 IP addresses and the last IP address in each subnet CIDR block are not available in your VPC, so you always subtract 5, hence 32 - 5 = 27.

Option 2 is incorrect as a /28 netmask only provides 16 IP addresses in total (11 usable after subtracting the 5 reserved addresses), which is not enough for 20 instances.

Options 3 and 4 are incorrect as the only allowed block size is between a /28 netmask and /16 netmask. 

To add a CIDR block to your VPC, the following rules apply:

  • -The allowed block size is between a /28 netmask and /16 netmask.

  • -The CIDR block must not overlap with any existing CIDR block that's associated with the VPC.

  • -You cannot increase or decrease the size of an existing CIDR block.

  • -You have a limit on the number of CIDR blocks you can associate with a VPC and the number of routes you can add to a route table. You cannot associate a CIDR block if this results in you exceeding your limits.

  • -The CIDR block must not be the same or larger than the CIDR range of a route in any of the VPC route tables. For example, if you have a route with a destination of 10.0.0.0/24 to a virtual private gateway, you cannot associate a CIDR block of the same range or larger. However, you can associate a CIDR block of 10.0.0.0/25 or smaller.

  • -The first four IP addresses and the last IP address in each subnet CIDR block are not available for you to use, and cannot be assigned to an instance.
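The overlap rule above can be checked locally with Python's standard ipaddress module (the CIDR values below are illustrative):

```python
import ipaddress

# Illustrative CIDRs: a candidate secondary block must not overlap any CIDR
# already associated with the VPC.
vpc_cidr = ipaddress.ip_network("10.0.0.0/24")
overlapping = ipaddress.ip_network("10.0.0.128/25")  # inside the VPC CIDR
disjoint = ipaddress.ip_network("10.0.1.0/25")       # outside it

print(vpc_cidr.overlaps(overlapping))  # → True (would be rejected)
print(vpc_cidr.overlaps(disjoint))     # → False (allowed, size rules permitting)
```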

 

Reference:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html

 

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/

Question 23: Skipped
You have a set of linux servers running on multiple On-Demand EC2 Instances. The Audit team wants to collect and process the application log files generated from these servers for their report. Which of the following services is the best to use in this case?

Explanation

Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. Additionally, you can use Amazon EMR to transform and move large amounts of data into and out of other AWS data stores and databases such as Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB.

Option 2 is wrong as Amazon Glacier is used for data archival only.

Option 3 is wrong as an EC2 instance is not a recommended storage service. In addition, Amazon EC2 does not have a built-in data processing engine to process large amounts of data.

Option 4 is wrong as Amazon RedShift is mainly used as a data warehouse service.

 

Reference:

http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html

 

Check out this Amazon EMR Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-emr/

 

Here is an in-depth tutorial on Amazon EMR:

Question 24: Skipped
Your team is planning to migrate a web application from your on-premise infrastructure to AWS cloud. Your team lead wants to ensure that even though the application will be in AWS, you can still manage the service and implement ongoing maintenance of packages. Which of the following AWS services can you use, which allows access to its underlying infrastructure? (Choose 2)

Explanation

You can connect and manage the EC2 instance so Option 2 is correct. You can install new packages and perform changes on the underlying infrastructure of the EC2 instance such as Enhanced Networking, Encryption, and so forth.

Elastic Beanstalk is a service that allows you to quickly deploy and manage your application in AWS. It automatically creates EC2 instances for your application, which you can also manage just like regular instances. Hence, Option 1 is correct.

 

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

 

Check out this AWS Elastic Beanstalk Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-elastic-beanstalk/

Question 25: Skipped
You are a Solutions Architect working with a company that uses Chef Configuration management in their data center. Which service is designed to let the customer leverage existing Chef recipes in AWS?

Explanation

AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. OpsWorks has three offerings: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks.

 

References: 

https://aws.amazon.com/opsworks/

 

Question 26: Skipped
A new employee has joined your organization. You provisioned an IAM user for the new employee in AWS; however, the user is not able to perform any actions. What could be the reason for this?

Explanation

The reason for this issue is that IAM users are created with no permissions by default. That means that when you created the new IAM user, you might not have attached any permissions to the user. Hence, Option 3 is correct and, conversely, Options 1 and 2 are wrong.

Option 4 is incorrect because permissions are applied immediately, not after 24 hours.

The IAM user might need to make API calls or use the AWS CLI or the Tools for Windows PowerShell. In that case, create an access key (an access key ID and a secret access key) for that user. This is called Programmatic access.

If the user needs to access AWS resources from the AWS Management Console, create a password and provide it to the user.

 

References:

https://aws.amazon.com/iam/details/manage-users/

Question 27: Skipped

You have a static corporate website hosted in a standard S3 bucket and a new web domain name which was registered using Route 53. You are instructed by your manager to integrate these two services in order to successfully launch their corporate website. What are the prerequisites when routing traffic using Amazon Route 53 to a website that is hosted in an Amazon S3 Bucket? (Choose 2) 

Explanation

Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket:

  • -An S3 bucket that is configured to host a static website. The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain acme.example.com, the name of the bucket must be acme.example.com.
  • -A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar.
  • -Route 53 as the DNS service for the domain. If you register your domain name by using Route 53, we automatically configure Route 53 as the DNS service for the domain.

 

Reference:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/RoutingToS3Bucket.html

 

Check out this Amazon Route 53 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-route-53/

Question 28: Skipped

You have a web application deployed in AWS which is currently running in the eu-central-1 region. You have an Auto Scaling group of On-Demand EC2 instances which are using pre-built AMIs. Your manager instructed you to implement disaster recovery for your system so in the event that the application goes down in the eu-central-1 region, a new instance can be started in the us-west-2 region. 

As part of your disaster recovery plan, which of the following should you take into consideration? 

Explanation

In this scenario, the EC2 instances you are currently using depend on a pre-built AMI. This AMI is not accessible in another region; hence, you have to copy it to the us-west-2 region to properly establish your disaster recovery instance.

You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy encrypted AMIs and AMIs with encrypted snapshots.

 

Options 1 and 3 are incorrect because the AMI has neither a Network Access Control feature nor a Share functionality.

Option 4 is incorrect because an AMI, whether unique or pre-built, can only be used in the region where it resides; to launch from it elsewhere, you must first copy it to that region.

 

References: 

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

Question 29: Skipped

You are a Solutions Architect of a multi-national gaming company which develops video games for PS4, Xbox One and Nintendo Switch consoles, plus a number of mobile games for Android and iOS. Due to the wide range of their products and services, you proposed that they use API Gateway.   

What are the key features of API Gateway that you can tell your client? (Choose 2)   

Explanation

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a “front door” for applications to access data, business logic, or functionality from your back-end services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, or any web application. Since it can use AWS Lambda, you can run your APIs without servers.

Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs. You pay only for the API calls you receive and the amount of data transferred out.

 

Reference:

https://aws.amazon.com/api-gateway/

 

Check out this Amazon API Gateway Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-api-gateway/

Question 30: Skipped

An online stock trading portal is deployed in AWS and in order to complete the set up, you need to offload the SSL/TLS processing for your web servers using CloudHSM. This will reduce the burden on your web servers and provides extra security by storing your web server's private key in this cloud-based hardware security module.   

Which of the following statements is not true about Amazon CloudHSM? 

Explanation

Take note that CloudHSM provides secure key storage in tamper-resistant hardware available in multiple Availability Zones (AZs), not just in one AZ. Hence, Option 4 is the statement that is not true, which makes it the correct answer to this question.

AWS CloudHSM runs in your own Amazon Virtual Private Cloud (VPC), enabling you to easily use your HSMs with applications running on your Amazon EC2 instances. With CloudHSM, you can use standard VPC security controls to manage access to your HSMs.

Your applications connect to your HSMs using mutually authenticated SSL channels established by your HSM client software. Since your HSMs are located in Amazon datacenters near your EC2 instances, you can reduce the network latency between your applications and HSMs versus an on-premises HSM.

  • AWS manages the hardware security module (HSM) appliance but does not have access to your keys
  • You control and manage your own keys
  • Application performance improves (due to close proximity with AWS workloads)
  • Secure key storage in tamper-resistant hardware available in multiple Availability Zones (AZs)
  • Your HSMs are in your Virtual Private Cloud (VPC) and isolated from other AWS networks.

Separation of duties and role-based access control is inherent in the design of the AWS CloudHSM. AWS monitors the health and network availability of your HSMs but is not involved in the creation and management of the key material stored within your HSMs. You control the HSMs and the generation and use of your encryption keys.

Reference:

https://aws.amazon.com/cloudhsm/

Question 31: Skipped
You want to establish an SSH connection to a Linux instance hosted in your VPC via the Internet. Which of the following is not required in order for this to work?

Explanation

Since you need to connect to your EC2 instance via the Internet, you basically need to ensure that your VPC has an attached Internet Gateway so it can communicate with the outside world. Your instance should also have either a public IP address or an Elastic IP address. In this scenario, you don't need a secondary private IP address since it is only used inside your VPC.

To enable access to or from the internet for instances in a VPC subnet, you must do the following:

  • -Attach an internet gateway to your VPC.
  • -Ensure that your subnet's route table points to the internet gateway.
  • -Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
  • -Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.

 

References:

https://aws.amazon.com/premiumsupport/knowledge-center/secondary-private-ip-address/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html/

  

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 32: Skipped
To save cost, a company decided to change their third-party data analytics tool to a cheaper solution. They sent a full data export in a CSV file which contains all of their analytics information. You then save the CSV file to an S3 bucket for storage. Your manager asked you to do some validation on the provided data export. In this scenario, what is the most cost-effective and easiest way to analyze the export data using standard SQL?

Explanation

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds.

Athena is serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—executing queries in parallel—so results are fast, even with large datasets and complex queries.

Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena.

Hence, the most cost-effective and appropriate answer in this scenario is Option 3: using Amazon Athena.

Options 1, 2 and 4 are all incorrect because it is not necessary to set up a database to analyze the CSV export file. You can use a more cost-effective option (Amazon Athena), a serverless service that lets you pay only for the queries you run.
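As a sketch, an ad-hoc validation query could be submitted through Athena's StartQueryExecution API (the database, table, and bucket names below are hypothetical, and the table would first need to be defined over the CSV's S3 location, e.g. with a CREATE EXTERNAL TABLE statement or AWS Glue):

```python
# Hypothetical names. This dict matches the shape accepted by
# boto3.client("athena").start_query_execution(**query_params).
query_params = {
    "QueryString": (
        "SELECT COUNT(*) FROM analytics_db.export WHERE user_id IS NULL;"
    ),
    "QueryExecutionContext": {"Database": "analytics_db"},
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}
print(query_params["QueryExecutionContext"]["Database"])
```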

 

Reference: 

https://docs.aws.amazon.com/athena/latest/ug/what-is.html

 

Check out this Amazon Athena Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-athena/

Question 33: Skipped

You are an IT Consultant for a top investment bank which is in the process of building its new Forex trading platform. To ensure high availability and scalability, you designed the trading platform to use an Elastic Load Balancer in front of an Auto Scaling group of On-Demand EC2 instances across multiple Availability Zones. For its database tier, you chose to use a single Amazon Aurora instance to take advantage of its distributed, fault-tolerant and self-healing storage system. 

In the event of system failure on the primary database instance, what happens to Amazon Aurora during the failover? 

Explanation

Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual administrative intervention.

If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds.

If you do not have an Amazon Aurora Replica (i.e. single instance), Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone. From start to finish, failover typically completes in under 15 minutes.

Hence, the correct answer is Option 2.

Options 1 and 3 are incorrect because this will only happen if you are using an Amazon Aurora Replica. In addition, Amazon Aurora flips the canonical name record (CNAME) and not the A record (IP address) of the instance.

Option 4 is incorrect because Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone and not the other way around.

 

Reference:

https://aws.amazon.com/rds/aurora/faqs/

 

Check out this Amazon Aurora Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-aurora/

Question 34: Skipped

You are working for a commercial bank as an AWS Infrastructure Engineer handling the forex trading application of the bank. You have an Auto Scaling group of EC2 instances that allows your company to cope with the current demand of traffic and achieve cost-efficiency. You want the Auto Scaling group to behave in such a way that it follows a predefined set of parameters before it scales down the number of EC2 instances, which protects your system from unintended slowdown or unavailability.

Which of the following statements are true regarding the cooldown period? (Select all that apply)

Explanation

In Auto Scaling, the following statements are correct regarding the cooldown period:

  1. It ensures that the Auto Scaling group does not launch or terminate additional EC2 instances before the previous scaling activity takes effect.
  2. Its default value is 300 seconds.
  3. It is a configurable setting for your Auto Scaling group.

Options 1, 2, and 5 are incorrect as these statements are false in depicting what the word "cooldown" actually means for Auto Scaling. The cooldown period is a configurable setting for your Auto Scaling group that helps to ensure that it doesn't launch or terminate additional instances before the previous scaling activity takes effect. After the Auto Scaling group dynamically scales using a simple scaling policy, it waits for the cooldown period to complete before resuming scaling activities.
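Since the cooldown is a configurable group attribute, it can be set explicitly; a sketch of the parameters (the group name is hypothetical) as they would be passed to UpdateAutoScalingGroup:

```python
# Hypothetical group name. With boto3 this dict would be passed to
# boto3.client("autoscaling").update_auto_scaling_group(**cooldown_params).
cooldown_params = {
    "AutoScalingGroupName": "forex-trading-asg",  # hypothetical name
    "DefaultCooldown": 300,  # seconds; 300 matches the default value
}
print(cooldown_params["DefaultCooldown"])
```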

The figure below demonstrates the scaling cooldown:

Reference: 

http://docs.aws.amazon.com/autoscaling/latest/userguide/as-instance-termination.html

 

Check out this AWS Auto Scaling Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-auto-scaling/

Question 35: Skipped

You are working as a Solutions Architect for a leading commercial bank which has recently adopted a hybrid cloud architecture. You have to ensure that the required data security is in place on all of their AWS resources to meet the strict financial regulatory requirements.   

In the AWS Shared Responsibility Model, which security aspects are the responsibilities of the customer? (Choose 2) 

Explanation

Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer's operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer assumes responsibility and management of the guest operating system (including updates and security patches), other associated application software, as well as the configuration of the AWS-provided security group firewall.

Customers should carefully consider the services they choose as their responsibilities vary depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. The nature of this shared responsibility also provides the flexibility and customer control that permits the deployment. This differentiation of responsibility is commonly referred to as Security “of” the Cloud versus Security “in” the Cloud.

The shared responsibility model for infrastructure services, such as Amazon Elastic Compute Cloud (Amazon EC2) for example, specifies that AWS manages the security of the following assets:

  • Facilities
  • Physical security of hardware
  • Network infrastructure
  • Virtualization infrastructure

 

You as the customer are responsible for the security of the following assets:

  • Amazon Machine Images (AMIs)
  • Operating systems
  • Applications
  • Data in transit
  • Data at rest
  • Data stores
  • Credentials
  • Policies and configuration

 

For a better understanding about this topic, refer to the AWS Security Best Practices whitepaper on the reference link below and also the Shared Responsibility Model diagram:

 

References: 

https://d0.awsstatic.com/whitepapers/aws-security-best-practices.pdf

https://aws.amazon.com/compliance/shared-responsibility-model/

Question 36: Skipped

You are a newly-hired Solutions Architect in a leading utilities provider, which is in the process of migrating their applications to AWS. You created an EBS-Backed EC2 instance with ephemeral0 and ephemeral1 instance store volumes attached to host a web application that fetches and stores data from a web API service.   

If this instance is stopped, what will happen to the data on the ephemeral store volumes?   

Explanation

The virtual devices for instance store volumes are named ephemeral[0-23]. Instance types that support one instance store volume have ephemeral0. Instance types that support two instance store volumes have ephemeral0 and ephemeral1, and so on until ephemeral23.

The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:

  • The underlying disk drive fails
  • The instance stops
  • The instance terminates

 

The word ephemeral means short-lived or temporary. Hence, when you see this word in AWS, always consider it to mean temporary, short-lived storage.

 

Reference: 

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html?shortFooter=true#instance-store-lifetime

 

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 37: Skipped

AWS hosts a variety of public datasets such as satellite imagery, geospatial, or genomic data that you want to use for your web application hosted in Amazon EC2.   

If you use these datasets, how much will it cost you? 

Explanation

AWS hosts a variety of public datasets that anyone can access for free.

Previously, large datasets such as satellite imagery or genomic data have required hours or days to locate, download, customize, and analyze. When data is made publicly available on AWS, anyone can analyze any volume of data without needing to download or store it themselves. 

 

Reference:

https://aws.amazon.com/public-datasets/

Question 38: Skipped
You are a Solutions Architect of a bank, designing various CloudFormation templates for a new online trading platform that your department will build. How much does it cost to use CloudFormation templates?

Explanation

There is no additional charge for AWS CloudFormation. You only pay for the AWS resources that are created (e.g. Amazon EC2 instances, Elastic Load Balancing load balancers, etc.)

 

Reference:

https://aws.amazon.com/cloudformation/faqs/

  

Check out this AWS CloudFormation Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-cloudformation/

Question 39: Skipped
An application is deployed in a fleet of Spot EC2 instances and uses a MySQL RDS database instance. Currently, there is only one RDS instance running in one Availability Zone. You plan to improve the database to ensure high availability and scalability by synchronous data replication to another RDS instance. Which of the following performs synchronous data replication in RDS?

Explanation

When you create or modify your DB instance to run as a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. Updates to your DB Instance are synchronously replicated across Availability Zones to the standby in order to keep both in sync and protect your latest database updates against DB instance failure. 

Option 2 is incorrect as a Read Replica provides an asynchronous replication instead of synchronous. In addition, a Read Replica is only available in Aurora, MySQL, MariaDB, and PostgreSQL database engines. 

Options 3 and 4 are incorrect because neither DynamoDB nor CloudFront has a Read Replica feature.

 

Reference:

https://aws.amazon.com/rds/details/multi-az/

 

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/

Question 40: Skipped

A company is using Redshift for its online analytical processing (OLAP) application which processes complex queries against large datasets. There is a requirement in which you have to define the number of query queues that are available and how queries are routed to those queues for processing.   

Which of the following will you use to meet this requirement? 

Explanation

In Amazon Redshift, you use workload management (WLM) to define the number of query queues that are available, and how queries are routed to those queues for processing. WLM is part of parameter group configuration. A cluster uses the WLM configuration that is specified in its associated parameter group.

When you create a parameter group, the default WLM configuration contains one queue that can run up to five queries concurrently. You can add additional queues and configure WLM properties in each of them if you want more control over query processing. Each queue that you add has the same default WLM configuration until you configure its properties. When you add additional queues, the last queue in the configuration is the default queue. Unless a query is routed to another queue based on criteria in the WLM configuration, it is processed by the default queue. You cannot specify user groups or query groups for the default queue.

As with other parameters, you cannot modify the WLM configuration in the default parameter group. Clusters associated with the default parameter group always use the default WLM configuration. If you want to modify the WLM configuration, you must create a parameter group and then associate that parameter group with any clusters that require your custom WLM configuration.
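The WLM configuration is supplied as a JSON document in the cluster's parameter group. As a hedged sketch of its shape (queue properties such as `query_concurrency`, `user_group`, and `query_group` follow the WLM documentation, but the group names and values here are purely illustrative):

```python
import json

# Illustrative custom WLM configuration for a Redshift parameter group.
# The last queue in the list acts as the default queue and cannot
# specify user groups or query groups.
wlm_config = [
    {   # queue 1: queries from the "reporting" user group are routed here
        "user_group": ["reporting"],
        "query_concurrency": 5,
    },
    {   # queue 2: queries labeled with the "etl" query group are routed here
        "query_group": ["etl"],
        "query_concurrency": 3,
    },
    {   # default queue: handles everything not matched above
        "query_concurrency": 5,
    },
]

print(json.dumps(wlm_config, indent=2))
```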

 

Reference:

https://docs.aws.amazon.com/redshift/latest/mgmt/workload-mgmt-config.html

 

Check out this Amazon Redshift Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-redshift/

Question 41: Skipped

You have launched a travel photo sharing website using Amazon S3 to serve high-quality photos to visitors of your website. After a few days, you found out that there are other travel websites linking and using your photos. This resulted in financial losses for your business.   

What is an effective method to mitigate this issue? 

Explanation

In Amazon S3, all objects by default are private. Only the object owner has permission to access these objects. However, the object owner can optionally share objects with others by creating a pre-signed URL, using their own security credentials, to grant time-limited permission to download the objects.

When you create a pre-signed URL for your object, you must provide your security credentials, specify a bucket name, an object key, specify the HTTP method (GET to download the object) and expiration date and time. The pre-signed URLs are valid only for the specified duration.

Anyone who receives the pre-signed URL can then access the object. For example, if you have a video in your bucket and both the bucket and the object are private, you can share the video with others by generating a pre-signed URL.
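To make the ingredients concrete, here is a hedged sketch of the query-string components that appear in a SigV4-style pre-signed URL. The bucket name and object key are hypothetical, and the signature is a dummy placeholder; in practice the SDK or CLI computes the signature from your security credentials.

```python
from urllib.parse import urlencode

# Illustrative assembly of an S3 pre-signed URL's components.
# The X-Amz-Signature value is a placeholder, not a real signature.
bucket = "travel-photos-bucket"   # hypothetical bucket name
key = "albums/paris/eiffel.jpg"   # hypothetical object key

params = {
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Expires": "3600",                 # URL is valid for one hour
    "X-Amz-Signature": "computed-by-sdk",    # placeholder value
}
presigned_url = f"https://{bucket}.s3.amazonaws.com/{key}?{urlencode(params)}"
print(presigned_url)
```

Once the expiration passes, requests using the URL are rejected, which is what stops other websites from hotlinking the photos indefinitely.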

 

Reference: 

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html

 

Check out this Amazon CloudFront Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-cloudfront/

Question 42: Skipped

There was an incident in your production environment where the user data stored in the S3 bucket has been accidentally deleted by one of the Junior DevOps Engineers. The issue was escalated to your manager and after a few days, you were instructed to improve the security and protection of your AWS resources.   

What combination of the following options will protect the S3 objects in your bucket from both accidental deletion and overwriting? (Choose 2) 

Explanation

By using Versioning and enabling MFA (Multi-Factor Authentication) Delete, you can secure and recover your S3 objects from accidental deletion or overwrite. 

Versioning is a means of keeping multiple variants of an object in the same bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

You can also optionally add another layer of security by configuring a bucket to enable MFA (Multi-Factor Authentication) Delete, which requires additional authentication for either of the following operations:

  • Change the versioning state of your bucket
  • Permanently delete an object version

 

MFA Delete requires two forms of authentication together:

  • Your security credentials
  • The concatenation of a valid serial number, a space, and the six-digit code displayed on an approved authentication device
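The concatenation format above can be sketched as a short example. The serial number and code below are made-up values for illustration only; this is the value an MFA-Delete request carries (the `x-amz-mfa` request header), not an API call.

```python
# Illustrative x-amz-mfa value: MFA device serial number, a space,
# then the six-digit code from the device. Values are made up.
serial_number = "arn:aws:iam::123456789012:mfa/root-account-mfa-device"
token_code = "123456"

x_amz_mfa = f"{serial_number} {token_code}"
print(x_amz_mfa)
```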

 

Reference: 

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

 

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/

Question 43: Skipped

A tech company that you are working for has undertaken a Total Cost Of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for storage of their personal documents.   

Which of the following will you need to consider so you can set up a solution that incorporates single sign-on feature from your corporate AD or LDAP directory and also restricts access for each individual user to a designated user folder in an S3 bucket? (Choose 2) 

Explanation

The question refers to one of the common scenarios for temporary credentials in AWS. Temporary credentials are useful in scenarios that involve identity federation, delegation, cross-account access, and IAM roles. In this example, it is called enterprise identity federation considering that you also need to set up a single sign-on (SSO) capability.

The correct answers are:

  • Setup a Federation proxy or an Identity provider
  • Setup an AWS Security Token Service to generate temporary tokens
  • Configure an IAM role

 

 

In an enterprise identity federation, you can authenticate users in your organization's network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory. You can also use SAML 2.0 to manage your own solution for federating user identities.
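The per-user folder restriction is typically handled by a policy attached to the IAM role that federated users assume, using the `${aws:username}` policy variable. Here is a hedged sketch of such a policy (the bucket name and `home/` prefix are hypothetical):

```python
import json

# Illustrative IAM policy limiting each user to a personal prefix in
# one S3 bucket via the ${aws:username} policy variable.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # allow listing only the user's own folder
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::company-user-docs",
            "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
        },
        {   # allow object operations only under the user's own folder
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::company-user-docs/home/${aws:username}/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```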

 

Reference:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html

 

Check out this AWS IAM Cheat Sheet: 

https://tutorialsdojo.com/aws-cheat-sheet-aws-identity-and-access-management-iam/

Question 44: Skipped
You have a web-based order processing system which is currently using a queue in Amazon SQS. The support team noticed that there are a lot of cases where an order was processed twice. This issue has caused a lot of trouble in your processing and made your customers very unhappy. Your IT Manager has asked you to ensure that this issue does not happen again. What can you do to prevent this from happening again in the future?

Explanation

The main issue here is that the order management system produces duplicate orders at times. Since the company is using SQS, there is a possibility that a message can have a duplicate in case an EC2 instance failed to delete the already processed message. To prevent this issue from happening, you have to use Amazon Simple Workflow service instead of SQS.

For standard queues, the visibility timeout isn't a guarantee against receiving a message twice. Hence, Option 2 is incorrect. To avoid duplicate SQS messages, it is better to design your applications to be idempotent (they should not be affected adversely when processing the same message more than once).
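The idempotent-processing advice can be sketched in a few lines: record the IDs of messages already handled, and skip any redelivery. This is a conceptual sketch only; in production the "seen" set would live in a durable store (for example, a database table keyed by message or order ID), not in memory.

```python
# Idempotent consumer sketch: standard SQS queues deliver at-least-once,
# so a redelivered message must not create a duplicate order.
processed_ids = set()
completed_orders = []

def handle_message(message_id, order):
    if message_id in processed_ids:
        return False              # duplicate delivery: already handled, skip
    completed_orders.append(order)
    processed_ids.add(message_id)
    return True

handle_message("msg-1", {"order": 1001})
handle_message("msg-1", {"order": 1001})  # redelivery of the same message
handle_message("msg-2", {"order": 1002})
print(len(completed_orders))  # each order is processed exactly once
```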

Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud. If your app's steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails.

 

References: 

https://aws.amazon.com/swf/

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html

   

Check out this Amazon SQS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-sqs/

Question 45: Skipped

You have one security group associated with 10 On-Demand EC2 instances. You configured the security group to allow all inbound SSH traffic and then right after that, you created two new EC2 instances in the same security group.   

When will the changes be applied to the EC2 instances?   

Explanation

Changes made to a security group are applied immediately to all associated EC2 instances.

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.
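As a hedged illustration, this is the shape of the rule the scenario describes (allow all inbound SSH), in the structure used by the EC2 `AuthorizeSecurityGroupIngress` API. Once the rule is authorized, it takes effect immediately on every instance associated with the group, including instances launched afterward.

```python
# Illustrative ingress rule structure (EC2 IpPermissions shape) that
# allows inbound SSH (TCP port 22) from any IPv4 address.
ssh_ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [
        {"CidrIp": "0.0.0.0/0", "Description": "allow SSH from anywhere"}
    ],
}
print(ssh_ingress_rule)
```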

 

References:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

Question 46: Skipped

Your company is launching a new web portal for its clients. It needs to be launched to a new VPC and will be composed of web servers that will host the UI app and the REST API services, including two database servers. The web portal will be accessed by the clients through the Internet.

In this scenario, which of the VPC configuration wizard options would you use?

Explanation

Since the web portal consists of both web and database servers, it is best to launch the web servers into the public subnet and the database server into the private subnet. Hence, Option 2 is the right answer.

Although you can use a single public subnet for your web and database servers, it will be a massive security risk as you are exposing your database publicly to the Internet.

 

Reference:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html 

 

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/

Question 47: Skipped

A startup is in a hurry to build an API for their mobile app to compete with their rival company. Based on their technical requirements, you recommended to build a serverless architecture instead of typically hosting the API in an EC2 instance.   

Which of the following AWS Services can you use to build and run serverless applications? (Choose 2) 

Explanation

AWS provides a set of fully managed services such as Lambda, API Gateway, DynamoDB, and many others that you can use to build and run serverless applications. Serverless applications don't require provisioning, maintaining, and administering servers for backend components such as compute, databases, storage, stream processing, message queueing, and more.

 

 

You also no longer need to worry about ensuring application fault tolerance and availability. Instead, AWS handles all of these capabilities for you. This allows you to focus on product innovation while enjoying faster time-to-market.

Option 3 is incorrect since ECS is a container orchestration service that supports Docker containers for running containerized services. It is not serverless, since you still manage the Docker containers yourself.

Option 4 is incorrect since EC2 is not serverless, even if you purchased reserved instances of them.

Option 5 is incorrect since SWF is a state tracker and task coordinator in the Cloud.

 

Reference:

https://aws.amazon.com/serverless/

 

Check out these AWS Lambda, Amazon DynamoDB and API Gateway Cheat Sheets:

https://tutorialsdojo.com/aws-cheat-sheet-aws-lambda/

https://tutorialsdojo.com/aws-cheat-sheet-amazon-dynamodb/

https://tutorialsdojo.com/aws-cheat-sheet-amazon-api-gateway/

 

Here is an in-depth tutorial on serverless applications:

Question 48: Skipped
You have started your new role as a Solutions Architect for a media company. They host large volumes of data for their operations which are about 250 TB in size on their internal servers. They have decided to store this data on S3 because of its durability and redundancy. The company currently has a 100 Mbps dedicated line connecting their head office to the Internet. What is the fastest way to import all this data to Amazon S3?

Explanation

Amazon Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

 

Reference:

https://aws.amazon.com/snowball/

 

Check out this AWS Snowball Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-snowball/ 

Question 49: Skipped

As a Network Architect developing a food ordering application, you need to retrieve the instance ID, public keys, and public IP address of the EC2 server you made for tagging and grouping the attributes into your internal application running on-premises. Which EC2 feature will help you achieve your requirements?

Explanation

Instance metadata is data about your instance that you can use to configure or manage the running instance. You can get the instance ID, public keys, public IP address, and much other information from the instance metadata by querying the following URL from within your instance:

http://169.254.169.254/latest/meta-data/
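The common metadata categories hang off that base path. The sketch below only constructs the URLs; the 169.254.169.254 address is reachable only from inside the instance itself, where you would fetch these paths with curl or urllib.

```python
# Illustrative instance metadata URLs. These categories (instance-id,
# public-ipv4, public-keys) are documented meta-data paths; fetching
# them only works from within a running EC2 instance.
BASE = "http://169.254.169.254/latest/meta-data"

paths = {
    "instance_id": f"{BASE}/instance-id",
    "public_ipv4": f"{BASE}/public-ipv4",
    "public_keys": f"{BASE}/public-keys/",
}
for name, url in paths.items():
    print(name, url)
```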

Option 1 is incorrect because the instance user data is mainly used to perform common automated configuration tasks and run scripts after the instance starts.

Option 2 is incorrect because resource tags are labels that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define.

Option 4 is incorrect because Amazon Machine Image (AMI) mainly provides the information required to launch an instance, which is a virtual server in the cloud.

 

Reference:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.htm

  

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 50: Skipped

A start-up company that offers an automated transcription service has consulted you about their AWS architecture. They have a fleet of Amazon EC2 worker instances that process an uploaded audio file and then generate a text file as an output. You must store both of the uploaded audio and generated text file in the same durable storage until the user has downloaded them. The number of files to be stored can grow over time as the start-up company is expanding rapidly overseas.

Which of the following storage option should you use for this scenario, which is both cost-efficient and scalable?

Explanation

Amazon S3 offers a highly durable, scalable, and secure destination for backing up and archiving your critical data. This is the correct option as the start-up company is looking for a durable storage to store the audio and text files.

  • Option 1 is incorrect as Amazon Redshift is usually used as a Data Warehouse.
  • Option 2 is incorrect as Amazon Glacier is usually used for data archives.
  • Option 4 is incorrect as data stored in an instance store is not durable.

 

Reference:

https://aws.amazon.com/s3/

Question 51: Skipped

You are building a transcription service for a company in which a fleet of EC2 worker instances process an uploaded audio file and generate a text file as an output. You must store both of these files in the same durable storage until the text file is retrieved by the uploader. Due to an expected surge in demand, you have to ensure that the storage is scalable.

Which storage option in AWS can you use in this situation, which is both cost-efficient and scalable? 

Explanation

In this scenario, the best option is to use Amazon S3. It’s a simple storage service that offers a highly-scalable, reliable, and low-latency data storage infrastructure at very low costs.

Options 1 and 4 are incorrect because these services do not provide durable storage.

Option 2 is incorrect because Amazon Glacier is mainly used for data archives, with retrieval times that can take a few hours. Hence, it is not suitable for the transcription service, where the stored files are frequently accessed.

 

Reference:

https://aws.amazon.com/s3/faqs/

 

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/

Question 52: Skipped

A WordPress website hosted in an EC2 instance, which has an additional EBS volume attached, was mistakenly deployed in the us-east-1a Availability Zone due to a misconfiguration in your CloudFormation template. There is a requirement to quickly rectify the issue by moving and attaching the EBS volume to a new EC2 instance in the us-east-1b Availability Zone. As the Solutions Architect of the company, which of the following should you do to solve this issue?

Explanation

The first step is to create a snapshot of the EBS volume. Create a volume using this snapshot and then specify the new Availability Zone accordingly.

 

 

A point-in-time snapshot of an EBS volume can be used as a baseline for new volumes or for data backup. If you make periodic snapshots of a volume, the snapshots are incremental—only the blocks on the device that have changed after your last snapshot are saved in the new snapshot. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the entire volume.

Snapshots occur asynchronously; the point-in-time snapshot is created immediately, but the status of the snapshot is pending until the snapshot is complete (when all of the modified blocks have been transferred to Amazon S3), which can take several hours for large initial snapshots or subsequent snapshots where many blocks have changed. While it is completing, an in-progress snapshot is not affected by ongoing reads and writes to the volume.

 

References:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html

   

Check out this Amazon EBS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-ebs/

Question 53: Skipped
A telecommunications company is planning to give AWS Console access to developers. Company policy mandates the use of identity federation and role-based access control. Currently, the roles are already assigned using groups in the corporate Active Directory. In this scenario, what combination of the following services can provide developers access to the AWS console? (Choose 2)

Explanation

Considering that the company is using a corporate Active Directory, it is best to use AWS Directory Service AD Connector for easier integration. In addition, since the roles are already assigned using groups in the corporate Active Directory, it would be better to also use IAM Roles. Take note that you can assign an IAM Role to the users or groups from your Active Directory once it is integrated with your VPC via the AWS Directory Service AD Connector.

 

 

AWS Directory Service provides multiple ways to use Amazon Cloud Directory and Microsoft Active Directory (AD) with other AWS services. Directories store information about users, groups, and devices, and administrators use them to manage access to information and resources. AWS Directory Service provides multiple directory choices for customers who want to use existing Microsoft AD or Lightweight Directory Access Protocol (LDAP)–aware applications in the cloud. It also offers those same choices to developers who need a directory to manage users, groups, devices, and access.

 

Reference: 

https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/

 

Check out this AWS IAM Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-identity-and-access-management-iam/

 

Here is a video tutorial on AWS Directory Service:

Question 54: Skipped

You are working as a Solutions Architect in a startup company which has a project that requires a notification service. You are planning to use Amazon SNS as it uses a publish/subscribe model for push delivery of messages.   

What are the different delivery formats or transports available for receiving notifications from this service? (Choose 2) 

Explanation

Amazon SNS supports notifications over multiple transport protocols in order for customers to have broad flexibility of delivery mechanisms. Customers can select one of the following transports as part of the subscription request:

  • HTTP, HTTPS – Subscribers specify a URL as part of the subscription registration; notifications will be delivered through an HTTP POST to the specified URL.
  • Email, Email-JSON – Messages are sent to registered addresses as email. Email-JSON sends notifications as a JSON object, while Email sends text-based email.
  • SQS – Users can specify an SQS standard queue as the endpoint; Amazon SNS will enqueue a notification message to the specified queue (which subscribers can then process using SQS APIs such as ReceiveMessage, DeleteMessage, etc.). Note that FIFO queues are not currently supported.
  • SMS – Messages are sent to registered phone numbers as SMS text messages.

 

Reference:

https://aws.amazon.com/sns/faqs/

 

Check out this Amazon SNS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-sns/

Question 55: Skipped

You recently created a brand new IAM User with a default setting using AWS CLI. This is intended to be used to send API requests to your S3, DynamoDB, Lambda, and other AWS resources of your cloud infrastructure.   

Which of the following must be done to allow the user to make API calls to your AWS resources? 

Explanation

You can choose the credentials that are right for your IAM user. When you use the AWS Management Console to create a user, you must choose to at least include a console password or access keys. By default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. You must create the type of credentials for an IAM user based on the needs of your user.

Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Users need their own access keys to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI), Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for individual AWS services.

To fill this need, you can create, modify, view, or rotate access keys (access key IDs and secret access keys) for IAM users. When you create an access key, IAM returns the access key ID and secret access key. You should save these in a secure location and give them to the user.
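To show what the access keys are actually used for, here is a hedged sketch of how a secret access key feeds into Signature Version 4 request signing (the scheme behind signed API calls). The chain of HMAC-SHA256 operations follows AWS's documented derivation; the credentials, date, region, and service below are made-up example values.

```python
import hashlib
import hmac

# Signature Version 4 signing-key derivation: a chain of HMAC-SHA256
# operations over the secret key, date, region, and service.
def derive_signing_key(secret_key, date_stamp, region, service):
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(),
                      hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Made-up example credentials -- never embed real keys in code.
key = derive_signing_key("wJalrXUtnFEMI/EXAMPLEKEY", "20240101", "us-east-1", "s3")
print(key.hex())
```

The secret key itself never travels over the wire; only the derived signature does, which is why leaked access keys must be rotated immediately.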

 

 

 

Option 1 is incorrect because by default, a brand new IAM user created using the AWS CLI or AWS API has no credentials of any kind. Take note that in the scenario, you created the new IAM user using the AWS CLI and not via the AWS Management Console, where you must choose to at least include a console password or access keys when creating a new IAM user.

Option 2 is incorrect because enabling Multi-Factor Authentication for the IAM user will still not provide the required Access Keys needed to send API calls to your AWS resources. You have to grant the IAM user with Access Keys to meet the requirement.

Option 3 is incorrect because adding a new IAM policy to the new user will not grant the needed Access Keys needed to make API calls to the AWS resources.

  

References:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html#id_users_creds 

 

Check out this AWS IAM Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-identity-and-access-management-iam/

Question 56: Skipped

Your fellow AWS Engineer has created a new Standard-class S3 bucket to store financial reports that are not frequently accessed but should be immediately available when an auditor requests for it. To save costs, you changed the storage class of the S3 bucket from Standard to Infrequent Access storage class.   

In Amazon S3 Standard - Infrequent Access storage class, which of the following statements are true? (Choose 2) 

Explanation

Amazon S3 Standard - Infrequent Access (Standard - IA) is an Amazon S3 storage class for data that is accessed less frequently, but requires rapid access when needed. Standard - IA offers the high durability, throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee.

This combination of low cost and high performance make Standard - IA ideal for long-term storage, backups, and as a data store for disaster recovery. The Standard - IA storage class is set at the object level and can exist in the same bucket as Standard, allowing you to use lifecycle policies to automatically transition objects between storage classes without any application changes.

Key Features:

  • Same low latency and high throughput performance of Standard
  • Designed for durability of 99.999999999% of objects
  • Designed for 99.9% availability over a given year
  • Backed with the Amazon S3 Service Level Agreement for availability
  • Supports SSL encryption of data in transit and at rest
  • Lifecycle management for automatic migration of objects

 

The option: "It provides high latency and low throughput performance" is wrong as it should be "low latency" and "high throughput" instead.

The option: "It is the best storage option to store noncritical and reproducible data" is wrong as it actually refers to Amazon S3 - Reduced Redundancy Storage (RRS). 

The option: "Ideal to use for data archiving." is wrong because this statement refers to Amazon Glacier.

 

Reference:

https://aws.amazon.com/s3/storage-classes/

 

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/

Question 57: Skipped

You were hired as an IT Consultant in a startup cryptocurrency company that wants to go global with their international money transfer app. Your project is to make sure that the database of the app is highly available on multiple regions.   

What are the benefits of adding Multi-AZ deployments in Amazon RDS? (Choose 2)

Explanation

The correct answers are options 1 & 4:

  • Increased database availability in the case of system upgrades like OS patching or DB Instance scaling.
  • It makes the database fault-tolerant to an Availability Zone failure

 

Option 3 is almost correct. RDS synchronously replicates the data to a standby instance in a different Availability Zone (AZ) that is in the same region and not in a different one.

Options 2 and 5 are incorrect because Multi-AZ deployments neither improve database performance nor provide SQL optimization.

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.

In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
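Because the endpoint stays the same after failover, the application only needs to reconnect to the same DNS name. A minimal sketch of this idea (not AWS code; `connect_fn` stands in for your actual database driver):

```python
import time

def connect_with_retry(connect_fn, endpoint, retries=5, delay=2.0):
    """Retry the same RDS endpoint until the Multi-AZ failover completes."""
    last_error = None
    for _ in range(retries):
        try:
            # The endpoint is identical before and after failover,
            # so no configuration change is needed here.
            return connect_fn(endpoint)
        except ConnectionError as err:
            last_error = err
            time.sleep(delay)  # the standby is being promoted
    raise last_error
```

In practice the driver's own reconnect logic often covers this, but the point stands: no manual intervention or endpoint change is required.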

 

Reference:

https://aws.amazon.com/rds/details/multi-az/

 

Check out this Amazon RDS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/

Question 58: Skipped

You are tasked to host a web application in a new VPC with private and public subnets. In order to do this, you will need to deploy a new MySQL database server and a fleet of EC2 instances to host the application. In which subnet should you launch the new database server into? 

Explanation

In an ideal and secure VPC architecture, you launch the web servers or elastic load balancers in the public subnet and the database servers in the private subnet. If you launch your database server in the public subnet, it can be reached from the public Internet, which poses a higher security risk. Hence, it is better to launch your database in the private subnet.
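A sketch of such a layout (all CIDR blocks are hypothetical): the web tier goes in the public subnet, the MySQL server in the private one, and only the public subnet's route table has a route to an Internet Gateway.

```python
# Hypothetical VPC layout: one public subnet for the web tier, one
# private subnet for the database, both carved from the VPC CIDR.
vpc = {"CidrBlock": "10.0.0.0/16"}

public_subnet = {
    "CidrBlock": "10.0.1.0/24",
    "MapPublicIpOnLaunch": True,   # web servers / load balancers live here
}
private_subnet = {
    "CidrBlock": "10.0.2.0/24",
    "MapPublicIpOnLaunch": False,  # MySQL server lives here, no public IP
}

# The private subnet's route table has no route to an Internet Gateway,
# so the database cannot be reached from the public Internet.
```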

 

Reference:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html

 

Check out this Amazon VPC Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/

Question 59: Skipped
You are a Solutions Architect in your company working with 3 DevOps Engineers under you. One of the engineers accidentally deleted a file hosted in Amazon S3 which has caused disruption of service. What can you do to prevent this from happening again?

Explanation

To avoid accidental deletion in Amazon S3 bucket, you can:

  • Enable Versioning
  • Enable MFA (Multi-Factor Authentication) Delete

 

Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.

If the MFA (Multi-Factor Authentication) Delete is enabled, it requires additional authentication for either of the following operations.

  • Change the versioning state of your bucket
  • Permanently delete an object version
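As a sketch, this is the request shape boto3's `put_bucket_versioning` accepts for enabling both protections (the bucket name, MFA device serial, and code are hypothetical; MFA Delete can only be enabled by the bucket owner's root credentials):

```python
# Versioning configuration enabling both versioning and MFA Delete.
versioning_config = {
    "Status": "Enabled",      # keep every version of every object
    "MFADelete": "Enabled",   # require MFA to purge versions or disable versioning
}

# Applied with the MFA device serial number and current token code:
# import boto3
# boto3.client("s3").put_bucket_versioning(
#     Bucket="critical-assets",
#     MFA="arn:aws:iam::123456789012:mfa/root-device 123456",
#     VersioningConfiguration=versioning_config)
```

With versioning on, an accidental delete only inserts a delete marker; the previous version remains recoverable.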

 

Reference:

http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

 

Check out this Amazon S3 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/

Question 60: Skipped

You are working as a Solutions Architect for a start-up company that has a not-for-profit crowdfunding platform hosted in AWS. Their platform allows people around the globe to raise money for social enterprise projects including challenging circumstances like accidents and illnesses. Since the system handles financial transactions, you have to ensure that your cloud architecture is secure. Which of the following AWS services encrypts data at rest by default? (Choose 2)

Explanation

All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL. By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3). Also, when using the file gateway, you can optionally configure each file share to have your objects encrypted with AWS KMS-Managed Keys using SSE-KMS.

Data stored in Amazon Glacier is protected by default; only vault owners have access to the Amazon Glacier resources they create. Amazon Glacier encrypts your data at rest by default and supports secure data transit with SSL.
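For the optional SSE-KMS path on a file gateway, the relevant fields of the `CreateNFSFileShare` request look like this (all ARNs are hypothetical; by default, with `KMSEncrypted` false, objects land in S3 with SSE-S3):

```python
# Hypothetical parameters for a file gateway NFS share encrypted with
# a customer-managed KMS key instead of the default SSE-S3.
nfs_share_params = {
    "ClientToken": "share-request-1",
    "GatewayARN": "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    "Role": "arn:aws:iam::123456789012:role/StorageGatewayAccess",
    "LocationARN": "arn:aws:s3:::audit-archive",
    "KMSEncrypted": True,  # opt in to SSE-KMS for objects in this share
    "KMSKey": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
}
```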

 

References:

https://aws.amazon.com/storagegateway/faqs/ 

https://aws.amazon.com/glacier/features

Question 61: Skipped
As a Solutions Architect, you have been requested to set up a highly decoupled application in AWS. Which of the following can help you accomplish this goal?

Explanation

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Building applications from individual components that each perform a discrete function improves scalability and reliability, and is best practice design for modern applications. SQS makes it simple and cost-effective to decouple and coordinate the components of a cloud application. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
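The decoupling pattern SQS provides can be illustrated with an in-process queue (in AWS you would use boto3's `send_message` / `receive_message` against a real queue URL instead; this is only a toy sketch of the pattern):

```python
from queue import Queue

queue = Queue()

def producer(order_id):
    # The front end only enqueues work; it never waits on a worker,
    # so it keeps responding even if all workers are down.
    queue.put({"order_id": order_id})

def consumer():
    # Workers poll independently and can scale out or be replaced
    # without the producer ever knowing.
    msg = queue.get()
    return f"processed order {msg['order_id']}"

producer(42)
print(consumer())  # → processed order 42
```

Neither component needs the other to be available at the same moment; the queue absorbs the gap, which is exactly what "decoupled" means here.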

 

Reference:

https://aws.amazon.com/sqs/ 

 

Check out this Amazon SQS Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-sqs/

Question 62: Skipped

As the Solutions Architect, you have built a photo-sharing site for an entertainment company. The site was hosted using 3 EC2 instances in a single availability zone with a Classic Load Balancer in front to evenly distribute the incoming load.   

What should you do to enable your Classic Load Balancer to bind a user's session to a specific instance? 

Explanation

By default, a Classic Load Balancer routes each request independently to the registered instance with the smallest load. However, you can use the sticky session feature (also known as session affinity), which enables the load balancer to bind a user's session to a specific instance. This ensures that all requests from the user during the session are sent to the same instance.

The key to managing sticky sessions is to determine how long your load balancer should consistently route the user's request to the same instance. If your application has its own session cookie, then you can configure Elastic Load Balancing so that the session cookie follows the duration specified. If your application does not have its own session cookie, then you can configure Elastic Load Balancing to create a session cookie by specifying your own stickiness duration.
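Conceptually, duration-based stickiness works like this sketch: the load balancer issues an `AWSELB` cookie on the first request and routes any repeat request carrying it back to the same registered instance (instance IDs here are hypothetical; the real cookie also encodes an expiry):

```python
import random

instances = ["i-0aaa111", "i-0bbb222", "i-0ccc333"]

def route(cookies):
    """Return (instance, cookies) for one request through the load balancer."""
    if "AWSELB" in cookies:
        return cookies["AWSELB"], cookies           # sticky: same instance again
    chosen = random.choice(instances)               # first request: pick an instance
    return chosen, {**cookies, "AWSELB": chosen}    # bind the session to it

first, cookies = route({})
second, _ = route(cookies)
assert first == second  # every request in the session hits one instance
```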

 

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html 

 

Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-aws-elastic-load-balancing-elb/

Question 63: Skipped
The IT Operations team of your company wants to retrieve all of the Public IP addresses assigned to a running EC2 instance via the Instance metadata. Which of the following URLs will you use?

Explanation

http://169.254.169.254/latest/meta-data/ is the URL that you can use to retrieve the Instance Metadata of your EC2 instance, including public-hostname, public-ipv4, public-keys, and so on.

This can be helpful when you're writing scripts to run from your instance as it enables you to access the local IP address of your instance from the instance metadata to manage a connection to an external application. Remember that you are not billed for HTTP requests used to retrieve instance metadata and user data.
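A minimal sketch of building the metadata URL for the public IPv4 address (note that 169.254.169.254 is a link-local address, only reachable from within the instance itself):

```python
# Base URL of the EC2 instance metadata service.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(key):
    """Return the metadata URL for a key such as 'public-ipv4'."""
    return METADATA_BASE + key

# On the instance itself, the value comes back as plain text:
# from urllib.request import urlopen
# public_ip = urlopen(metadata_url("public-ipv4")).read().decode()
```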

 

Reference:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

 

Check out this Amazon EC2 Cheat Sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-compute-cloud-amazon-ec2/

Question 64: Skipped

You are designing a banking portal which uses Amazon ElastiCache for Redis as its distributed session management component. Since the other Cloud Engineers in your department have access to your ElastiCache cluster, you have to secure the session data in the portal by requiring them to enter a password before they are granted permission to execute Redis commands.   

As the Solutions Architect, which of the following should you do to meet the above requirement? 

Explanation

Using the Redis AUTH command can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password-protected Redis server. Hence, Option 3 is the correct answer.

To require that users enter a password on a password-protected Redis server, include the parameter --auth-token with the correct password when you create your replication group or cluster and on all subsequent commands to the replication group or cluster.
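The CLI shape for this looks roughly as follows (the group ID, node type, and token value are hypothetical; note that Redis AUTH requires in-transit encryption to be enabled):

```python
# Hypothetical AWS CLI invocation, assembled as an argument list, for
# creating a Redis replication group protected by an AUTH token.
auth_token = "MySecretToken123!"  # 16-128 printable characters

create_cmd = [
    "aws", "elasticache", "create-replication-group",
    "--replication-group-id", "banking-sessions",
    "--replication-group-description", "session store",
    "--engine", "redis",
    "--cache-node-type", "cache.t3.micro",
    "--transit-encryption-enabled",  # required for Redis AUTH
    "--auth-token", auth_token,
]

# Clients then authenticate before running commands, e.g. with redis-py
# by passing password=auth_token when creating the connection.
```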

 

 

Option 1 is incorrect because this is not possible in IAM. You have to use the Redis AUTH option instead.

Option 2 is incorrect because the Redis At-Rest Encryption feature only secures the data inside the in-memory data store. You have to use Redis AUTH option instead.

Option 4 is incorrect because although in-transit encryption is part of the solution, it is missing the most important thing which is the Redis AUTH option.

 

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html

 

Check out this Amazon Elasticache cheat sheet:

https://tutorialsdojo.com/aws-cheat-sheet-amazon-elasticache/

Question 65: Skipped

You are building a microservices architecture in which software is composed of small independent services that communicate over well-defined APIs. In building large-scale systems, fine-grained decoupling of microservices is a recommended practice to implement. The decoupled services should scale horizontally from each other to improve scalability.

What is the difference between Horizontal scaling and Vertical scaling?

Explanation

Vertical scaling means running the same software on bigger machines which is limited by the capacity of the individual server. Horizontal scaling is adding more servers to the existing pool and doesn’t run into limitations of individual servers.


 

Fine-grained decoupling of microservices is a best practice for building large-scale systems. It’s a prerequisite for performance optimization since it allows choosing the appropriate and optimal technologies for a specific service. Each service can be implemented with the appropriate programming languages and frameworks, leverage the optimal data persistence solution, and be fine-tuned with the best performing service configurations.

Properly decoupled services can be scaled horizontally and independently from each other. Vertical scaling, which is running the same software on bigger machines, is limited by the capacity of individual servers and can incur downtime during the scaling process. Horizontal scaling, which is adding more servers to the existing pool, is highly dynamic and doesn’t run into limitations of individual servers. The scaling process can be completely automated.
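A toy illustration of the contrast: vertical scaling is capped by the largest single machine, while horizontal scaling grows with the number of machines in the pool (the capacity numbers are arbitrary):

```python
LARGEST_INSTANCE_CAPACITY = 128  # requests/sec the biggest machine can handle

def vertical_scale(desired):
    # One bigger box: capacity can never exceed the largest instance size.
    return min(desired, LARGEST_INSTANCE_CAPACITY)

def horizontal_scale(desired, per_instance=32):
    # More boxes: add instances until the pool covers the demand.
    instances = -(-desired // per_instance)  # ceiling division
    return instances * per_instance

assert vertical_scale(500) == 128    # limited by the individual server
assert horizontal_scale(500) == 512  # 16 instances cover the load
```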

Furthermore, the resiliency of the application can be improved because failing components can be easily and automatically replaced. Hence, Option 3 is the correct answer.

Option 1 is incorrect because Vertical scaling is not about running the same software on a fully serverless architecture. AWS Lambda is not required for scaling.

Option 2 is incorrect because the definitions for the two concepts were switched. Vertical scaling means running the same software on bigger machines which is limited by the capacity of the individual server. Horizontal scaling is adding more servers to the existing pool and doesn’t run into limitations of individual servers.

Option 4 is incorrect because Horizontal scaling is not related to using ECS or EKS containers on a smaller instance.

 

Reference:

https://docs.aws.amazon.com/aws-technical-content/latest/microservices-on-aws/microservices-on-aws.pdf#page=8

About this course

Sampler of 130 questions in 2 Practice Tests with Full Explanations, Reference Links, and Score Tracking

By the numbers
Skill level: Beginner Level
Students: 18258
Languages: English
Captions: No
Description

**Patterned after the latest exam format and updated regularly based on feedback from our 13,000+ students on what appeared in the actual exam. Our practice tests are TOP NOTCH, as demonstrated by the 500+ reviews on our course**

================================================

This is a sampler of our AWS Certified Solutions Architect Associate Practice Exams, the complete version of which has 390 Unique questions across 6 Practice Tests. Patterned after the latest exam version, this sampler only contains 2 practice tests out of the 6 tests available in the complete version but nevertheless, this will still help you prepare for your AWS certification exam. And quite frankly, there are a lot of Practice Tests here on Udemy that charge hundreds of dollars yet do not even provide a sampler of 130 unique questions like this! 

And if you like this sampler, feel free to check out our AWS Certified Solutions Architect Associate Practice Exams with the complete 390 questions in 6 practice tests. We are confident that this sampler, as well as our complete AWS Practice Exams, will help you pass your certification exam without spending too much money on practice tests!

================================================

AWS Certified Solutions Architect Associate ( SAA-C01 ) is consistently among the top paying IT certifications, considering that Amazon Web Services is the leading cloud services platform in the world with almost 50% market share! Earn over $150,000 per year with an AWS certification!

But before you become an AWS Certified Solutions Architect Professional, you have to pass the Associate exam first and this is where the AWS practice tests come in. It is possible to have read all of the available AWS documentation online and still fail the exam! These AWS practice tests simulate the actual certification exam and ensure that you indeed understand the subject matter. 

Some people use brain dumps for the certification exam, which is totally absurd and highly unprofessional: these dumps not only hinder you from attaining in-depth AWS knowledge, they can also result in you failing the actual AWS exam since Amazon regularly updates the exam coverage.

Please also note that these AWS practice tests are not brain dumps, and since Amazon shuffles the actual exam content from a question bank of 500+ questions, it is nearly impossible to match what you see here with the actual exam. Again, the key to passing the exam is a good understanding of AWS services, and this is what our AWS Certified Solutions Architect Associate practice tests are meant to build.

As mentioned above, this is just a sampler of the AWS Certified Solutions Architect Associate Practice Exams, which offers the following:

  • Simulates the actual, and the latest, AWS Solutions Architect Associate certification exam

  • Has 2 Practice Tests with 65 UNIQUE questions each with a 130-minute time limit

  • A total of 130 Unique questions to help you pass and even ace the AWS exam!

  • Has full and comprehensive explanations on each question.

  • Has complete Reference Links so you can check and verify yourself that the answers are correct.

  • Contains a Test Report to track your progress and show you in which Knowledge Area you need improvement.

  • Mobile-compatible so you can conveniently review everywhere, anytime with your smartphone!

  • Has a better value than the official AWS Practice Test which is worth about $20 but only contains about 20 - 40 questions.

  • Clear and Error-Free Questions! Each item has a reference link that can validate the answer but you can also post to the QA section so we can discuss any issues.

  • Prepared by a Certified Solutions Architect Professional who has actually passed the exam! (Please see my LinkedIn profile to view my AWS Certificate)

  • These practice exams are designed to focus on the important exam topics (such as EC2, EBS, S3 and many others); hence, the aforementioned topics have more questions than the other knowledge areas. The number of questions in each topic is carefully selected based on the 5 domains of the actual AWS certification exam. Out of the 5 domains, Domain 1: Design Resilient Architectures has the highest percentage (34%) with topics such as reliable and/or resilient storage; how to design decoupling mechanisms; and how to design multi-tier, highly available, and fault-tolerant architectures using EC2, EBS, S3 and many others. Note that although there is a focus on these topics, the questions are all still unique, to ensure that you fully grasp each topic.


There are a lot of existing AWS Practice Tests in the market; however, most of them contain both technical and grammatical errors that may cause you to fail the actual exam. There are also official certification practice exams provided by AWS, but these only have 20 or 40 questions and cost 20 or 30 USD -- a price comparable to these 390 Unique and Timed Amazon Web Services practice questions! 

When I was reviewing for my AWS Certified Solutions Architect Associate exam, I had a hard time finding comprehensive practice tests to help me pass my exam. I bought some of them in the market but I was disappointed because there are a lot of technical and grammatical errors in the questions. This is why I created these practice tests to help my fellow IT professionals in the industry. 

We put a considerable amount of effort into creating and publishing these practice tests, including the laborious task of checking each item for any errors. We are confident that this will significantly help you pass your exam. All the best!


IMPORTANT NOTE 

These practice exams have a passing score of 72% but I highly encourage you to repeat taking these exams again and again until you consistently reach a score of 90% or higher on each exam. Note that the AWS Certification passing score is not published by Amazon as it is set by using statistical analysis which may change without notice.

Remember that using this product alone does not guarantee you will pass the exam as you still need to do your own readings and hands-on exercises in AWS. Nonetheless, these practice exams provide a comprehensive assessment on which knowledge area you need improvement and even help you achieve a higher score!

What you’ll learn

  • Become an AWS Certified Solutions Architect - Associate
  • Learn the AWS concepts in-depth with the Comprehensive Explanations included in each answer.
  • Validate your answers and do further readings with helpful Reference Links
  • Take the Practice Exams again and again, unlike the AWS-provided practice exam that you can only do once.

Are there any course requirements or prerequisites?

  • If you have a computer or a smartphone then you are ready to go!

Who this course is for:

  • For those who are about to take the AWS Certified Solutions Architect - Associate exam
Instructor
User photo

AWS CCP, Developer, Solutions Architect, SysOps, DevOps

Hello! I'm Jon, a Filipino Full Stack software developer based in Sydney, Australia, with over a decade of diversified experience in Banking, Financial Services, and Telecommunications industries. I'm an Amazon Certified Solutions Architect and DevOps Professional and have been working with various cloud services such as Google Cloud Platform and Microsoft Azure. I was also employed by top tech and finance companies such as HP, Accenture, Telstra, Macquarie Bank, and News Corp to name a few.

When I'm not in front of the screen, I like to play the piano and guitar but my greatest passion is teaching, as I believe that quality education is the greatest equalizer. Coming from my humble beginnings in the Philippines, I know how hard life can be if you don't have the technical knowledge and skill set needed to have a well-paying job. I am not ashamed to publicly tell you that I was born in the slums of Metro Manila and I didn't excel in my studies. Growing up, we were selling rice, canned goods, poultry supplies and other stuff in our small variety store (also called 'sari-sari' store in Filipino).

But with hard work, determination and unrelenting persistence, I was able to turn my life around and help my family in return. So aside from the technical knowledge that you may learn from my course, I also promote having these life values. Even though I failed so much before in life, I DID NOT QUIT and persisted to further improve my skills and myself too. I was able to work as an IT Professional for almost 4 years in Manila and 3 years in Singapore before migrating to Australia on my own. The salary that I earned from my IT career is truly life-changing and this is what I want you to have as well!

I am passionate about what I do and I dedicate a lot of my time to creating educational courses while juggling a full-time job here in Sydney. I have given free IT seminars at different universities and conferences in the Philippines and have launched various educational websites as well, using my own money - without any external funding. 

Having said all of this, I invite you to check out my courses here in Udemy and kindly let me know how I can further improve my products. Your constructive feedback is a powerful tool not just to help me, but to also help other people who may have been struggling to learn the subject matter. 

Thank you and I hope to hear from you soon.

Instructor
User photo

AWS Training and Certification Reviewers

Tutorials Dojo is your one-stop learning portal for technology-related topics, empowering you to upgrade your skills and your career.

Since 2016, we have been providing high-quality educational resources to help both students and professionals reach their full potential.

We offer a wealth of tutorials on various tech topics. Start learning with us now!